The AI security singularity
Understand the future
The transformative impact of AI on cybersecurity
Our perspective | Anticipation series | August 2024
© 2024 Cynergence
We proudly live in the Queen City and drive our business from Charlotte, North Carolina.
AI technology is quickly becoming more integrated into businesses, leading to a transformation in how organizations operate. This transformation is still in its early stages and is expected to significantly change the cybersecurity domain as we know it. Chief Information Security Officers (CISOs) and senior leaders need to anticipate this evolution and be prepared for the transformative impact of AI.
In brief
- AI will transform cybersecurity from governance to operations, directly affecting organizational structure, human resources, and processes.
- The synchronicity between AI and cybersecurity should drastically improve the organization's security posture.
- Chief information security officers, senior executives, and board directors need a clear vision of how AI will impact security in the short, mid, and long term in order to define and lead the next-generation cybersecurity models.
10 minutes to read
Author
Frederic Georgel
Primary audience
CEO, CFO, CTO, CIO, CISO, CHRO, CDO, COO, CLO, BD
Other audience
Students, Researchers, Press, Shareholders, Investors
#artificialintelligence #AI
#cycle #transformation #cybersecurity #airisk
Introduction
This study series does not aim to analyze the state of the art in AI but rather to explore the potential future of cybersecurity with AI. Based on our research and observations, we estimate our forecast accuracy at between 55% and 75%.
The rapid evolution of AI is poised to revolutionize cybersecurity. Over the years, it will reshape the core of the CISO organization and transform the roles of technology leaders. To lead in this transformation, senior executives must understand AI's evolution in cyber defense and its potential impacts during the next evolution cycles.
Artificial intelligence is not new, but in 2022, after a long period of dormancy known as the AI Winter, AI came back with generative AI and initiated a massive transformation of the technology world.
AI is fundamentally the same as other IT technologies. It requires Data, Processing, and Computing to work, but AI has specific characteristics:
- AI uses a very large amount of data and parameters (e.g., large models such as GPT-4 are estimated to use more than a trillion parameters).
- AI needs extensive computational resources.
- AI has specific features that include evolutionary computing, computer vision, machine learning, neural networks, deep learning, and natural language processing (NLP).
Organizations must have a large volume of data to benefit significantly from AI, but data quality is key to success.
Tech companies that collect a large amount of data, such as Meta, Google, and Apple, have a clear advantage over other organizations.
The immense popularity of ChatGPT has created the false perception that AI is a centralized model. It is not.
Organizations will face a proliferation of different AI solutions across their current platforms. Vendors such as SAP, Oracle, Workday, Salesforce, Microsoft, and Adobe are already embedding AI inside their own solutions to support specific functions. This situation will affect the entire organization, and the cybersecurity ecosystem will not be spared. By the end of the Co-pilot cycle, cybersecurity teams could be using more than 20 embedded AIs to co-pilot security activities.
This illustration shows the cybersecurity areas where AI technology will most likely be deployed in the next 5 to 10 years.
From a security perspective, it's essential to remember that cybersecurity is an ecosystem that involves, at minimum, the following functions:
- Risk management
- Finance
- Human resources
- Technology operations
- Technology design and architecture
- Technology security
- Privacy management
- Business operations
- Enterprise management
The AIs supporting these functions must work in synchronicity to enhance the organization's technology resistance and resilience against cyber threats.
Organizations should remain vigilant due to the rapid spread of various AI types and generations, which could create new silos between business functions and lead to a loss of control.
Understand artificial intelligence cycles and their impacts
The following analysis explores four major cycles.
Cycle 1: Predict
Description
The significant advancements in deep learning, which began around 2012, have transformed machine learning capabilities. Deep learning enables machines to learn from vast datasets and make accurate predictions without explicit programming. In cybersecurity, deep learning is extensively applied in domains such as Intrusion Detection Systems (IDS), Malware Detection, Phishing Detection, User Behavior Analytics, and event and pattern identification.
Social Impact
This cycle has promoted the creation of new positions, such as data engineer, data analyst, data scientist, and machine learning engineer. It had a positive impact on the workforce by creating more jobs. Deep learning has also triggered the emergence of facial recognition. In 2017, Apple introduced Face ID, which the public has broadly accepted. However, several countries have used facial recognition to monitor and control their populations, a practice that has been widely condemned.
Cybersecurity Impact
Deep learning has not materially changed the cyber team's organizational structure, but it has significantly improved the security posture. This technology is usually embedded in vendor solutions and has markedly improved the capability to detect threats.
Cycle 2: Co-pilot (current cycle)
Description
This cycle began in 2021 with the introduction of GitHub Copilot, an AI coding assistant integrated into GitHub's collaborative software development platform. The concept of co-piloting has received such strong industry recognition that the current AI cycle has been named "Co-pilot." AI Co-pilot solutions aim to assist users in various tasks by providing real-time suggestions. For instance, an AI Co-pilot aids security professionals by highlighting suspicious activities or vulnerabilities, supports architects and engineers in implementing a zero trust model, optimizes the deployment and operation of Security Orchestration, Automation, and Response (SOAR) systems, and facilitates communication and collaboration between different teams.
Social Impact
The Co-pilot cycle is well accepted by the workforce, which generally appreciates working with a virtual intelligent assistant. This cycle also opens many new career opportunities. Unfortunately, all of these benefits must be weighed against the negative effects it produces, particularly because generative AI enables the production of artificial voices and videos that can be weaponized as deepfakes by bad actors or nation-states to influence political opinion and create social disruption.
Cybersecurity Impact
Generative AI and Co-pilot are triggering several changes in the cyber team's profile because they require more coding capability (developers), more data expertise (data analysts, engineers, and scientists), and AI cybersecurity specialists.
Co-pilot helps to improve cybersecurity in almost all of its domains. The three main groups that will benefit directly are:
- Security Operations (detect and respond)
- Security Engineering
- Security Application and Coding
The other cyber teams will progressively benefit from AI in the last years of the Co-pilot cycle.
Co-pilot generates two major side effects:
- The introduction of AI Co-pilot has started to impact entry-level security analyst positions and may lead to their gradual replacement in the coming years. The extent of this impact on other positions will depend on the functionality and capabilities offered by the AI Co-pilot in the near future. Additionally, the shortage of cybersecurity talent may further drive the adoption of AI and potentially expedite the transition to the next phase (Pilot cycle).
- Malicious actors may use generative AI and Co-pilot to enhance and refine cyber attack capabilities. This is particularly prevalent in phishing attacks and social engineering, where deepfake voice and video can be utilized. It is important to note that voice authentication can no longer be relied upon as a valid security measure.
Cycle 3: Pilot
Description
This cycle will likely begin sometime between 2028 and 2030. "Pilot" represents the natural evolution of Co-pilot. It is based on the concept of autonomous decision and action. AI Pilots will take actions based on acquired knowledge (internal and external), available tools, the nature of the challenge (threat or incident), and the policy that drives the decision. With AI Pilots, organizations will be able to detect potential attacks at the reconnaissance stage—something extremely difficult to achieve manually or with traditional automation—and will take more rapid and preventive actions.
The advantages of AI Pilots include faster response times, enhanced cybersecurity capabilities, and cost reduction. However, the Pilot cycle represents a significant shift in decision-making, as leaders and managers will cede control of operational decisions and actions to an artificial intelligence.
Social Impact
Even if the technology can support the deployment of AI Pilots, social acceptance could shift from positive to negative if the benefits are not well demonstrated. This situation could be challenging for a vast majority of organizations and countries around the world. The main reasons for this shift are:
- The evolution of AI will significantly impact the global workforce. By the end of the Pilot cycle, companies may have reduced their workforce by 15 to 30%. Unfortunately, the demand for human resources in the AI industry will not be enough to directly compensate for the loss of jobs.
- The AI's capability to take specific actions without human approval will trigger fears that will transform into opposition and political reactions during the Pilot cycle and may extend into the next one.
On the other hand, small and medium-sized companies could greatly benefit from this evolution. They will be able to do more with limited resources. It could also reduce public service costs and improve service access and quality for the population.
Based on our research and analysis, from a business perspective, the benefits will outweigh the drawbacks, and the transformation triggered by the Pilot cycle will proceed despite social turbulence.
Cybersecurity Impact
The Pilot cycle will significantly impact the cybersecurity industry. Professionals may be exposed to significant workforce reductions, particularly in security operations and security assurance (controls).
In this cycle, AI will autonomously pilot:
- Anomaly detection
- Incident response
- Forensics
- System recovery
- Security control implementation, operationalization, and monitoring
The Pilot cycle will change the cybersecurity organizational structure, roles, and responsibilities. The CISO will have to build a cybersecurity organization that supports new functions:
- Governance will have to define the AI's policy and scope of action (WHY, WHAT, and WHEN).
- Engineers, developers, and data scientists will have to train the AI to act effectively (HOW and WHERE).
Cybersecurity will not be the only function affected by this new cycle. All functions involved in the cybersecurity ecosystem will experience essential changes.
For example:
- HR could have to define and operationalize dedicated onboarding and offboarding processes for AI, like any other digital identity.
- The audit function will be responsible for verifying the security posture and the actions taken by AI during an incident.
- The security governance function will have to define accountability when the AI takes actions based on defined policies and makes decisions based on learning.
Interoperability between different AIs could be a significant technical and legal challenge. We anticipate that AI providers will be reluctant to be exposed to situations where one AI can learn from another. CAGI (Collaborative Artificial General Intelligence), an emergent concept introduced by Rotem Alaluf in 2024, aims to address this issue by implementing guardrails that allow multiple AIs to collaborate. The AI industry needs to define and standardize the rules for managing this type of situation, and CAGI could be a strong basis for discussion. Organizations must be aware of this type of risk before moving forward.
During the Pilot cycle, malicious actors will use AI to continue enhancing cyber attacks; precision and speed will be AI's primary advantages.
Cycle 4: Manage
Description
The end of the Pilot cycle will be triggered by a significant evolution known as the Singularity, expected to occur between 2038 and 2040, although political and social factors will likely influence this timeline. The Singularity marks the moment when artificial intelligence surpasses human intelligence, allowing AI systems to enhance themselves independently. This will lead to rapid technological progress and improved management abilities.
While creativity, research, and strategy should continue to be managed by humans with AI support, tactics and execution will likely be transferred to AI.
In the Manage cycle, artificial intelligence (AI) will increasingly handle business functions independently, following established governance policies and strategies. The knowledge and experience gained by the governance team in the previous cycle (Pilot) will be essential to succeed in this cycle. AI orchestrators are anticipated to have a pivotal role in managing and coordinating all AIs used across different business functions such as HR, Finance, Risk, Audit, Procurement, etc. The role of AI orchestrators will be fundamental and should be considered highly critical by organizations.
The Singularity cycle should also coincide with the fusion of robotics and artificial intelligence.
Social Impact
The Manage cycle will profoundly change society and the economy. Due to its impact on the workforce in all business sectors, the social acceptability of this evolution will be challenging if not properly anticipated and regulated.
Many organizations and countries will experience significant political and social resistance. The main reasons for this opposition are:
- This evolution will significantly impact the workforce across the world.
- The AI's ability to manage business activities autonomously will continue to trigger fears and political reactions that include regulations, limitations, and potential bans.
Conversely, the benefits for the population could be significant on several fronts. Healthcare, environmental sustainability, quality of life, social equity, and inclusion will be positively impacted during this cycle and will generate direct benefits for the population.
Cybersecurity Impact
For cybersecurity, this upcoming phase will allow AI to manage cybersecurity at tactical and operational levels autonomously. This means the AI will be able to decide on and acquire the resources it needs, develop code, build and configure networks and systems, proactively detect and respond to threats, manage compliance with all applicable laws and regulations, maintain and improve cybersecurity resilience against advanced threats, and design, implement, and manage security measures and controls autonomously.
This cycle will have a deep impact on cybersecurity organizations that must be clearly understood and anticipated. All positions within the cybersecurity organization, as we know them in 2024, will undergo significant changes, and many of them will be totally disbanded. This situation is not new. Throughout the history of computing, several jobs and functions have completely disappeared due to technological advancements (such as Keypunch Operator, Typist, Tape Librarian, etc.) while new ones have been created. The extent of the organizational transformation during this cycle will be significant for IT and cybersecurity teams.
From a business perspective, this cycle will bring major benefits, such as significant cost reduction and improvement in technology resistance and resilience against cyber threats. Still, it also means that organizations will progressively lose some of the cybersecurity knowledge and expertise acquired during the previous decades and partially or totally depend on their AIs to protect them.
With ubiquitous access to AI technologies, the gaps in cybersecurity capabilities between large and small organizations are expected to be significantly reduced during this cycle. This is a major improvement, considering that small and medium-sized businesses (SMBs) generate about 44% of the economic activity in the United States.
It's important to note that all these cycles last 9 to 10 years, aligning with previous technology transformation cycles.
AI does not accelerate the transformation timeframe, even though its initial adoption speed has been unprecedented.
Across the four cycles, technology leadership positions will be progressively consolidated and reduced.
The transition of the cybersecurity functions to AI is expected as follows:
Humans will continue to oversee the security governance function even at the end of the "Manage" cycle because it will play a crucial role in establishing the policies that shape and frame AI's decisions and actions.
AI cybersecurity threats and risks
New cyber threats
Even though AI creates specific business and societal risks, Cynergence considers that AI will not directly introduce new cyber threats during the current and next cycles.
Existing cyber threats
Several threats well known to the cybersecurity community will be enhanced by artificial intelligence, increasing their attack strength and extending their potential impact on the systems and information used by organizations. We can already identify the following:
- Social engineering
  - Deepfakes (voice and video) can compromise voice authentication mechanisms and amplify the success rate of phishing and social engineering attacks.
  - Risk probability: High.
- Ransomware and malware attacks
  - AI can improve malware design to make it less detectable and more effective (polymorphic code). It can also adapt malware behavior in real time to bypass security measures.
  - Risk probability: High to Very High (Pilot & Manage cycles). The risk is deeply linked to the bad actors' ability to leverage AI technology.
- Attack orchestration
  - AI can improve attack strategies, orchestrate intensive attacks, and improve execution speed.
  - Risk probability: High to Very High (Pilot & Manage cycles). The risk is deeply linked to the bad actors' ability to leverage AI technology.
- Privacy and confidentiality
  - The data used to train an AI, and the AI's outputs, can ignore privacy and confidentiality requirements.
  - Some anonymization techniques can compromise the quality of the data used to train AI models. The lack of regulation in several countries allows international companies to intensively use private and confidential information to train AI in those regions. In these cases, the issue lies not with AI itself but with data management ethics. Privacy regulations are ineffective for AI if they are only local and lack data traceability.
- Financial fraud
  - AI can improve financial fraud schemes by implementing highly complex mechanisms to steal, move, and hide money. Banks, investors, and national revenue services will be targeted.
  - Risk probability: High.
AI risk exposure
- Poisoned AI inputs
  - Yes, it is possible to poison AI inputs despite the large size of training datasets. Attackers can inject corrupted data into training datasets, which can skew the AI's learning process. Researchers demonstrated the feasibility of this attack in 2017. The probability of this attack remains moderate, for two major reasons:
  - First, preparing data to train an AI requires extreme attention. Data scientists have multiple processes in place to ensure the quality of the data, and current database security mechanisms can detect a large amount of corrupted data (a minimal screening sketch follows this list).
  - Second, the learning process includes anomaly detection (deviation from what is normal or expected) in observed behavior, such as bias or hallucinations. A large amount of corrupt data will automatically trigger the detection of unexpected outcomes, stopping the learning process.
- Prompt injection
  - Yes, it is possible to re-educate or influence an AI by injecting many false answers. The mitigations for this risk are well known and include:
  - Input validation to filter and sanitize user inputs.
  - Context-aware filtering to detect and block suspicious or harmful input patterns.
  - AI response monitoring to continuously analyze AI outcomes and detect unexpected or harmful behavior.
  - User authentication to limit access and trace interactions back to individual users.
  - Combined, these mechanisms are the most effective way to manage this risk (a minimal filtering sketch also follows this list).
- Hallucination
  - Hallucination risk arises when AI generates false or misleading information due to data or model errors, which can lead to unreliable outputs. This can increase the false positives of detection mechanisms and provoke an inappropriate reaction during an incident.
  - Risk probability: Medium to High. Manufacturers of defensive technologies know the risk and deploy mechanisms to reduce it to a minimum. Organizations with poor internal data quality and poor processes are exposed to a significant increase in risk.
- Bias
  - Bias has a similar root cause to hallucination, but the outputs are not false; they are oriented in a certain direction.
  - Risk probability: High. A company's AI policies, shaped by its specific business context and culture, can influence an AI's bias. In cybersecurity, bias could be considered a positive risk in certain cases because the AI could react to an attack with a controlled bias based on company policy. Bias should be considered a negative risk if it generates inappropriate and uncontrolled outcomes.
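To make the poisoned-input discussion concrete, here is a minimal sketch of the kind of statistical screening a training pipeline can apply before learning begins. The function name and threshold are our own illustration, not a specific vendor mechanism; real pipelines layer many such checks.

```python
import numpy as np

def flag_suspect_samples(features: np.ndarray, z_threshold: float = 4.0) -> np.ndarray:
    """Return indices of training samples whose features deviate strongly
    from the rest of the dataset (a crude poisoning screen)."""
    mean = features.mean(axis=0)
    std = features.std(axis=0) + 1e-9            # avoid division by zero
    z_scores = np.abs((features - mean) / std)   # per-feature deviation
    # A sample is suspect if any of its features is an extreme outlier.
    return np.where((z_scores > z_threshold).any(axis=1))[0]

# Usage: screen a feature matrix before it enters the training pipeline.
rng = np.random.default_rng(0)
data = rng.normal(size=(1000, 8))
data[42] += 25.0                                 # simulate an injected outlier
print(flag_suspect_samples(data))                # -> [42]
```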
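Similarly, for prompt injection, the sketch below illustrates the input-validation and context-aware-filtering mitigations listed above. The deny-list patterns are illustrative assumptions; production filters combine far richer pattern sets with response monitoring and user authentication.

```python
import re

# Illustrative deny-list; real context-aware filters are far more extensive.
SUSPICIOUS_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"reveal (your )?(system )?prompt",
    r"disable (your )?(safety|security) (rules|filters)",
]

def validate_prompt(user_input: str, max_length: int = 2000) -> str:
    """Basic input validation and filtering applied before a prompt
    reaches the model."""
    if len(user_input) > max_length:
        raise ValueError("input exceeds allowed length")
    cleaned = user_input.replace("\x00", "").strip()   # sanitize control bytes
    for pattern in SUSPICIOUS_PATTERNS:
        if re.search(pattern, cleaned, re.IGNORECASE):
            raise ValueError("suspicious input pattern blocked")
    return cleaned

print(validate_prompt("Summarize today's security alerts"))
```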
AI-driven cybersecurity transformation practice
The introduction of generative AI will change the structure and practice of cybersecurity during the current and next cycles.
With the advent of AI, the CIA triad, which has long defined cybersecurity practice, will undergo a critical change. This represents the most significant evolution in cybersecurity over the past 44 years. CISOs must understand this essential transformation to lead artificial intelligence security.
This means that the well-known security CIA triad will become a tetrad that includes a new pillar named Accuracy.
The introduction of the Accuracy pillar is a direct consequence of AI's recent development.
Cybersecurity should be responsible for identifying and mitigating data and information Accuracy risks, as these risks can materialize and cause significant losses for the organization.
New terminology associated with the Accuracy pillar:
- Data Accuracy (DA) refers to the input and output data used by an AI during the training and operation phases.
- Information Accuracy (IA) refers to the input and output information used by an AI during the training and operation phases.
- Decision Accuracy (DA) refers to the accuracy of an AI decision during the certification and operation phases.
- Action Accuracy (AA) refers to the accuracy of an AI action during the training and operation phases.
- DIDA (Data, Information, Decision, Action) refers to the scope of security Accuracy.
Accuracy differs from integrity, as data or information can have strong integrity but still lack accuracy from an AI perspective.
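A minimal sketch of that distinction, using a hypothetical sensor record and a tolerance we chose for illustration: the integrity check passes because the stored bytes are unchanged, while the accuracy check fails because the recorded value no longer matches ground truth.

```python
import hashlib

def integrity_ok(payload: bytes, expected_sha256: str) -> bool:
    """Integrity: the data has not been altered at rest or in transit."""
    return hashlib.sha256(payload).hexdigest() == expected_sha256

def accuracy_ok(value: float, reference: float, tolerance: float = 0.05) -> bool:
    """Accuracy: the data is close enough to ground truth to be trusted
    by an AI (the threshold is set by accuracy governance)."""
    return abs(value - reference) <= tolerance * abs(reference)

record = b"cpu_load=0.93"                  # a reading stored with its hash
digest = hashlib.sha256(record).hexdigest()

print(integrity_ok(record, digest))        # True: record was never tampered with
print(accuracy_ok(0.93, reference=0.41))   # False: the sensor is miscalibrated
```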
Dedicated controls should be designed and implemented in all functions associated with NIST CSF 2.0:
- Govern DIDA Accuracy
  - Accuracy governance must manage the accuracy thresholds, roles and responsibilities, and risk mitigation mechanisms.
- Identify DIDA Accuracy
  - Identify, classify, and inventory the AI(s) used by the organization and their owner(s).
  - Identify, classify, and inventory the Data, Information, Decisions, and Actions related to each AI used by the organization.
  - Identify the accuracy expected of Data, Information, Decisions, and Actions (DIDA).
- Protect DIDA Accuracy
  - Mechanisms to protect the accuracy of Data, Information, Decisions, and Actions (DIDA) related to AI must be defined, deployed, and monitored.
- Detect DIDA Accuracy compromise
  - Mechanisms to detect accuracy corruption of Data, Information, Decisions, and Actions (DIDA) related to AI must be defined and deployed (a minimal detect-stage sketch follows this list).
- Respond to DIDA Accuracy compromise
  - Mechanisms to identify, contain, eradicate, and restore the accuracy of Data, Information, Decisions, and Actions (DIDA) must be defined and deployed.
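As one illustration of the Detect function above, the following sketch re-scores a trusted canary set and raises an alert when observed accuracy falls below a governed baseline. The model, canary data, and thresholds are hypothetical placeholders, not a prescribed control design.

```python
def accuracy_drift_alert(model_predict, canary_inputs, canary_labels,
                         baseline_accuracy: float,
                         drop_tolerance: float = 0.10) -> str:
    """Detect-stage control: alert when accuracy on a trusted canary set
    falls materially below the baseline set by accuracy governance."""
    correct = sum(model_predict(x) == y
                  for x, y in zip(canary_inputs, canary_labels))
    observed = correct / len(canary_labels)
    if observed < baseline_accuracy - drop_tolerance:
        return f"ALERT: accuracy {observed:.2f} below baseline {baseline_accuracy:.2f}"
    return f"OK: accuracy {observed:.2f}"

# Usage with a toy classifier that gets 3 of 4 canary answers right.
print(accuracy_drift_alert(lambda x: x % 2, [1, 2, 3, 4], [1, 0, 0, 0], 0.95))
```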
It's important to note that the concept of accuracy shouldn't be confined to Artificial Intelligence. Accuracy controls are also important in other areas, such as metrics, indicators, signals, attributes, risk assessment, audits, reports, etc.
Accuracy should also be a key factor in assessing the effectiveness of security controls by evaluating the accuracy of the metrics used by a control.
How can a successful AI cybersecurity transition be performed?
To succeed with AI, organizations must first implement a proper AI management structure. Since AI is a technology, the Chief Artificial Intelligence Officer (CAIO) should report directly to the Chief Technology Officer (CTO).
Typical technology organization during the Co-Pilot and Pilot cycles
To succeed in the current and next AI cycles, the CISO needs to onboard data scientists and cybersecurity AI experts and collaborate intensively with the CTO and CAIO.
Cybersecurity's transition to AI should be based on a long-term roadmap with yearly milestones that create progressive AI foundations in each major cycle of evolution. The CISO must define the AI security vision and share it with the board and key business leaders to obtain the required support of the executives.
The fundamental elements for managing AI security successfully:
- Security data must be gathered and prepared to be ingested by the AI security tools chosen to enhance the organization's defense capabilities.
- Accuracy is a new pillar of the cybersecurity practice alongside Confidentiality, Integrity, and Availability. Accuracy controls must be deployed before the end of the current cycle (Co-pilot).
- A new security governance practice must be defined and deployed to frame and control AI decisions and actions (What, When, Where, Why, How).
- The introduction of an AI orchestrator to manage and coordinate all AIs used by cybersecurity will be critical.
- Interoperability between AIs must be clarified at the legal and technical levels.
- Synchronicity between AI and cybersecurity must be permanent.
During the Co-pilot cycle, the CISO's focus must be:
- All data and information used by cybersecurity must be identified, classified, and standardized for AI training purposes.
- All AIs deployed in the organization should be inventoried and classified (a minimal inventory-record sketch follows this list).
- Controls dedicated to the Accuracy pillar (this cycle's scope only includes Data and Information) must be designed and deployed.
- The cybersecurity team must include data scientists, engineers, analysts, AI security experts, and back-end developers.
- The security governance team must be trained to write AI security policies that frame AI behavior (decisions and actions).
- Deep collaboration between the CISO, CTO, CDO, and CAIO must be promoted.
- The EU Artificial Intelligence Act (AI Act) entered into force on August 1, 2024, and will apply by the following deadlines:
  - February 2, 2025: Prohibitions on AI systems presenting unacceptable risks take effect.
  - August 2, 2025: Rules for GPAI models come into force.
  - August 2, 2026: Most of the EU AI Act rules will apply.
- Use the Co-pilot cycle to create the foundation for the next cycle.
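As a sketch of what one entry in such an AI inventory might capture (the fields and values are our illustrative assumptions, not a standard schema):

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIAsset:
    """One entry in the organization's AI inventory (illustrative fields)."""
    name: str                     # e.g., "SOC event-triage copilot"
    vendor: str
    business_function: str        # HR, Finance, Security Operations, ...
    owner: str                    # accountable human owner
    data_classification: str      # Public / Internal / Confidential / Restricted
    capabilities: list[str] = field(default_factory=list)  # "suggest", "act", ...
    onboarded: date = field(default_factory=date.today)

inventory = [
    AIAsset("SOC event-triage copilot", "ExampleVendor", "Security Operations",
            "ciso-office", "Confidential", capabilities=["suggest"]),
]
print(inventory[0].name, "->", inventory[0].data_classification)
```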
During the Pilot cycle, the CISO's focus must be:
- Consolidate the security team progressively.
- Design and deploy controls dedicated to the Accuracy pillar (scope: Data, Information, Decision, and Action).
- Implement AIs with action capabilities first in cybersecurity operations to improve detection and response velocity.
- Consolidate AIs by security function.
- Deploy a security AI orchestrator, and define and deploy a specific set of controls to protect it.
- Deploy AI security policies that frame AI actions and identify accountability.
- Verify the accuracy of control effectiveness (scope: DIDA).
For effective AI security, tools must:
- Be specifically trained by the manufacturer for a particular function (e.g., event detection).
- Allow the cybersecurity data set to be ingested so that the organization's context is considered.
- Maintain permanent synchronicity between AI and cybersecurity.
- Not expose the organization's security data and information.
Conclusion
Artificial intelligence will significantly transform cybersecurity. To prepare for this change, CISOs must focus on key milestones that will be integrated into their security programs over the next few years. The success of this transformation depends on a clear roadmap and strong collaboration between technology and business leaders. Organizations not prepared for the inevitable AI security transition will likely face overspending and struggle against the next generation of cyber attacks.
How can we help?
Cynergence has pioneered a distinctive approach to assisting senior cybersecurity leaders with coaching, training, mentoring, and advisory services. This approach, known as AWAKEN, is spearheaded by a trusted advisor (CG) who provides support to the Chief Information Security Officer (CISO) in addressing the particular challenges posed by Artificial Intelligence (AI) in cybersecurity, such as the Accuracy pillar.