The impact of AI on cyberattacks is often viewed through a simple lens: if AI automates some aspect of an attack, then obviously there will be many more successful attacks.
This is unrealistic. Like any system involving multiple participants with competing interests, cybersecurity exists in a complex equilibrium. As a result, automating a particular capability might not lead to a large increase in successful attacks. For instance, some attackers might be limited by the fear of retaliation, rather than by an inability to scale up their operations.
To understand how advances in AI capabilities will impact the lived experience of cyberattacks, it’s important to understand this equilibrium. On this page, we review the important mechanisms of the cybersecurity equilibrium, some asymmetries between attack and defense, and implications for how various AI capabilities might tip the balance. We conclude with a simple framework for estimating the impact of AI progress on the volume and impact of cyberattacks.
This section lists mechanisms that tend to reduce the impact of an advance in attacker capabilities. In general, an advance will still have some impact, but it could be small or large, depending on how strongly these mechanisms apply.
Suppose that someone wants to break into a certain organization. They identify employees of that organization, and send carefully crafted scam emails (spear phishing). If AI writes the emails for them, they can use the time saved to seek out additional employees to target. However, they will find it progressively harder to find good candidates (e.g. people who can be clearly identified as employees and whose social media presence contains information useful in crafting a plausible email).
In economics, this is known as diminishing marginal returns: as you do more of something, you get progressively less return for each additional unit of investment. In cybersecurity, another example might be scouting for vulnerable devices to incorporate into a botnet: eventually you’ll run out of easily-compromised targets.
Many steps in a cybersecurity attack may suffer from diminishing marginal returns. If a particular step is subject to significant diminishing marginal returns – if, at the current rate of attacks, each additional success at that step requires noticeably more effort than the last – then we will say that it is a rate-limiting step.
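To make this concrete, here is a minimal numerical sketch of a saturating returns curve, written in Python. The pool size, saturation rate, and the exponential shape of the curve are all illustrative assumptions rather than data; the only point is that each additional hour of target research yields fewer new viable targets than the last.

```python
import math

# Illustrative assumptions only: suppose the pool of "good" spear-phishing targets
# saturates, so the cumulative number of viable targets found follows an
# exponential-saturation curve of research effort. Pool size and rate are made up.
POOL_SIZE = 200          # assumed number of usable targets that exist at all
SATURATION_RATE = 0.02   # assumed rate at which the pool gets exhausted

def viable_targets(hours_of_research: float) -> float:
    """Cumulative viable targets found after a given amount of research effort."""
    return POOL_SIZE * (1 - math.exp(-SATURATION_RATE * hours_of_research))

for hours in (50, 100, 200, 400):
    found = viable_targets(hours)
    marginal = viable_targets(hours + 1) - found  # gain from one more hour
    print(f"{hours:4d} h of research: {found:6.1f} targets found, "
          f"+{marginal:.2f} from the next hour")
```

In this toy model, doubling the research effort from 100 to 200 hours adds only about 23 more targets, and the marginal gain per hour keeps shrinking; that is exactly the situation in which automating another part of the attack buys little.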
Any successful cyberattack entails multiple steps. For instance, a partial list of steps for even a simple phishing attack might include identifying targets, crafting emails, stealing credentials, employing those credentials to gain initial access to some system, exploiting that access (likely encompassing multiple steps), and laundering the proceeds.
If automation of one step allows an attacker to carry out more attacks, then other, non-automated steps may become a limiting factor. This can be especially true if diminishing returns kick in. For instance, it might become harder to find new targets, or to hire skilled staff to exploit stolen credentials.
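This bottleneck logic can be illustrated with a small sketch. The step names and weekly capacities below are hypothetical; the point is simply that end-to-end throughput is capped by the slowest step, so automating one step helps only until another step becomes the constraint.

```python
# Hypothetical numbers: complete attacks per week are capped by the slowest step.
steps_per_week = {
    "identify targets": 40,
    "craft phishing emails": 10,          # currently the bottleneck
    "exploit stolen credentials": 15,     # limited by scarce skilled staff
    "launder proceeds": 30,
}

def throughput(capacities):
    """End-to-end attacks per week = capacity of the most constrained step."""
    return min(capacities.values())

print("before automation:", throughput(steps_per_week))            # 10 per week

# Suppose AI automates email crafting, raising that step's capacity 100-fold.
automated = dict(steps_per_week, **{"craft phishing emails": 1000})
print("after automating email crafting:", throughput(automated))   # 15 per week
```

In this toy model, a 100-fold improvement in email crafting raises overall output by only 50 percent, because credential exploitation immediately becomes the new rate-limiting step.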
Some steps are already “easy” for attackers, i.e. cheap to perform and easy to scale. Automating these steps will have little impact.
As one panelist noted, this can be especially true for well-resourced nation state attackers:
It is worth mentioning that these actors are able to execute [attacks on infrastructure] successfully at little cost without any assistance from AI. Espionage and nation state actors are generally only limited by the intelligence they need to successfully execute these operations. They are not limited by technical capabilities in the same way an opportunistic actor such as Ransomware group might be.
Each attempted attack carries a risk that the defender will notice it and then act to prevent that form of attack from being used in the future. This limits an attacker's incentive to scale up their operations.
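One way to see why, under a simplifying assumption: suppose each additional use of a technique carries an independent chance of being detected and blocked. Then the expected number of uses the attacker gets out of the technique is roughly one over the detection probability, regardless of how cheaply AI lets them generate attempts. The payoff and detection figures below are purely illustrative.

```python
# Illustrative assumptions: each attempt has an independent chance of being detected,
# after which defenders block that form of attack entirely.
PAYOFF_PER_SUCCESS = 10_000    # assumed value of one successful attack
DETECTION_PROBABILITY = 0.05   # assumed chance a single attempt burns the technique

# Expected attempts before the technique is burned (geometric distribution).
expected_uses = 1 / DETECTION_PROBABILITY
expected_total_payoff = PAYOFF_PER_SUCCESS * expected_uses

print(f"expected uses before the technique is blocked: {expected_uses:.0f}")
print(f"expected total payoff, however fast attacks are generated: "
      f"${expected_total_payoff:,.0f}")
```

Under these assumptions, a higher attack rate does not increase the total expected payoff; it only brings the moment of detection forward. That blunts the incentive to use cheap automation to drive attack volume as high as possible.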