Can AI finally fix cybersecurity?
3rd July 2024 • Park Plaza Victoria, London, UK
Does AI help attackers or defenders the most? Which solutions really use it? And what questions should you be asking?
Deconstructing the hype cycle: where is AI the real deal?
The fundamental advantage attackers have over defenders in cyberspace has long been understood: they only have to be right once, while defenders have to be right every time. This problem has been called the “Defender’s Dilemma,” and it lies at the heart of the pessimistic view of cybersecurity that says, “it’s not if but when” and “there are only two kinds of company: those that have been hacked, and those that don’t yet know they’ve been hacked.”
The rapidly increasing availability of sophisticated artificial intelligence tools is clearly going to change the calculus in cybersecurity – but to whose advantage?
One Big Tech CISO has recently published a paper saying that AI will finally allow defenders to reverse their “Dilemma” and “tilt the scales of cyberspace to give defenders a decisive advantage over attackers [enabling] us to effectively cope with the complexity of our digital world and can help turn every organisation into a competent defender.”
However, researchers at an equally large Big Tech monolith point out that timing is everything. They argue that the advantages AI confers on attackers will prove decisive unless defenders invest now. Their research finds that 87% of UK businesses are unprepared for the age of AI because of their vulnerability to cyberattacks. But it also finds that organisations using AI-enabled cybersecurity are twice as resilient to attacks as those that do not, and suffer 20% lower costs when successfully attacked.
But these assertions raise a host of complex questions:
What kinds of AI are hackers using, and to do what? Do these attacks require AI to detect and repel? Are the use cases for AI in defence different to those for attackers? Where in your technology stack does AI deliver the most value? How do you evaluate new AI solutions, and how confident can you be that they will still be relevant in three years’ time? What about explainability?
Perhaps most importantly: how does AI help with the most common and most dangerous threats today – ransomware, third-party security and identity in general?
And what about non-AI solutions: are they obsolete? Are we at a technology cliff-edge requiring huge new investment?