
Smart Shields, Smarter Threats: The Dual-Use Dilemma of AI in Cybersecurity

By David Hood, CEO, ANSecurity.

  • Wednesday, 5th November 2025. Posted by Phil Alsop

It’s an exciting time in enterprise technology. Yet before we all get too excited, we should remember that technology is fundamentally morally neutral: it can be turned to noble ends or to malicious ones. AI is just the latest in a long line of examples.

AI has seemingly made its way into almost every platform, tool and product currently on offer within cybersecurity. Many are certainly bullish about its potential: as of the beginning of this year, 55% of organisations had already implemented some form of AI-based cybersecurity.

Yet security is often a game of escalation. As defenders improve, so do attackers, and for every technology that improves security, cybercriminals will find ways to exploit it to their own ends. For as much as AI might revolutionise cybersecurity in the future, it is already revolutionising cybercriminality.

How cybercriminals are now using AI

AI has had two principal effects on cybercriminals. Firstly, it has democratised cybercriminality. Generative AI tools - such as code assistants and even ChatGPT - have allowed people to learn how to carry out cyberattacks, conduct quick and easy reconnaissance on potential targets and even generate code for attack scripts. The same is true of fraud, where generative AI has allowed inexperienced and untrained fraudsters to craft ever more convincing phishing emails and assume false and stolen identities with deepfakes and AI-generated documents.

The barrier to entry has thus been massively lowered for both cybercriminals and fraudsters. In the same way that generative AI has produced adolescent tech millionaires overnight from “vibe coded” apps, it has allowed budding criminals to launch attacks and pull off scams without even knowing how to code.

For more advanced actors, it has supercharged their capabilities. Automated frameworks have allowed them to scale up their attacks massively; they’re using AI to discover vulnerabilities in targeted entities, scrape data for fast and thorough reconnaissance, optimise password cracking, operate ever more stealthily and even run AI botnets that decide the most opportune moment to strike. On top of all that, they’re using AI tools to steal and fabricate identities that trick identity verification systems, carry out fraudulent transactions with banks and imitate voices and faces for deepfake fraud and vishing scams.

Examples of these new possibilities abound. The BlackMatter ransomware group, for example, is now using AI to improve its encryption strategies and evade detection tools. In 2021, criminals used AI tools to clone the voice of a company director; using that fraudulent voice, they successfully conned a UAE-based bank into transferring $35 million into their accounts. In 2023, Indiana University researchers found a network of bots - dubbed fox8 - that used ChatGPT to generate crypto-focused spam and misinformation designed to defraud X users. Perhaps most importantly, these bots were not spotted by X’s bot detectors or by LLM content detectors.

Cybercriminal AI has grown into a shadowy reflection of the legitimate world, with its own ecosystem of tools, services and heated debates about what the technology means for their trade. The result is that AI-generated attacks have spiked. According to SoSafe’s Cybercrime Trends 2025 report, 87% of global organisations faced an AI-based cyberattack in the last year. A SentinelOne report says that phishing has grown by 1,265% due to generative AI, and CrowdStrike reports that voice phishing (vishing) attacks were up 442% between the first and second halves of 2024.

A shadowy reflection

Much as in software development, there is now an ecosystem - and a growing economy - of tools and services for cybercriminality. Some are simply the publicly available AI tools that anyone can use, such as ChatGPT. Others are jailbroken generative AI services, so-called “Dark LLMs,” that will provide information and perform actions which would normally violate their ethical guardrails. These are now on open sale on the dark web, as criminal vendors take models from Anthropic, xAI, Mistral and others, jailbreak them and rent them out to cybercriminals.

Anthropic recently published research detailing how cybercriminals used its AI tool, Claude, to perpetrate a wide-ranging campaign. The authors wrote: “The actor demonstrated unprecedented integration of artificial intelligence throughout their attack lifecycle, with Claude Code supporting reconnaissance, exploitation, lateral movement, and data exfiltration.” This particular threat actor used Claude to compromise and extort 17 targets across government, healthcare, emergency services and even religious institutions.

Claude allowed them to orchestrate and scale their campaign: it automated reconnaissance, harvested and tracked credentials, advised how best to penetrate targeted networks and which data to take, crafted extortion strategies based on the specificities of each target, determined appropriate ransom amounts and even generated ransom messages tailored to each victim. Perhaps most importantly, Claude adapted to the targets’ defences in real time, lending the attackers an agility their victims lacked. Claude was used at seemingly every step of the campaign to great effect, and it’s likely just one example of what we’ll see in the future. Indeed, the report concludes: “we expect this model to become increasingly common as AI lowers the barrier to entry for sophisticated cybercrime operations.”

AI Defence: useful, not decisive

AI will not be a silver bullet here, and the benefits it brings to cybersecurity won’t necessarily help stop AI-based attacks. Defenders - whether they use AI cybersecurity tools or not - don’t feel equipped to resist them: Darktrace’s 2025 AI Cybersecurity report found that 45% don’t feel adequately prepared for the threat of AI cyberattacks.

The diversity of tactics and possibilities that malicious AI has unlocked means that resisting it won’t simply be a matter of bolting on an AI-based defence tool. That will help, but like any tool, its value depends on how, why and by whom it is used.

AI in cybersecurity will likely prove most useful in hyper-specialised use cases, in improvements to pre-existing technologies and when embedded within hardware “at the edge.” Neural Processing Units (NPUs), for example, are chips that accelerate AI/ML inference workloads on the device itself, rather than in the cloud. This means many security functions that would normally happen at the network level could happen on-device: every laptop and desktop could gain its own AI firewall, perform on-device behavioural analysis and local anomaly detection, and carry out immediate prevention and isolation when threats are detected. This could mean a real hardening of endpoints the world over.
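
To make the idea concrete, here is a minimal, hypothetical sketch of the kind of on-device behavioural analysis such an endpoint agent might run, written in Python with scikit-learn's IsolationForest standing in for a small locally accelerated model. The telemetry fields, thresholds and "quarantine" response are illustrative assumptions, not any vendor's actual implementation.

```python
# Minimal sketch of on-device behavioural anomaly detection, of the kind an
# NPU-accelerated endpoint agent might run locally (no cloud round-trip).
# Telemetry fields and values below are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-process telemetry:
# [cpu %, bytes sent per second, files touched per minute, child processes]
baseline = np.array([
    [3.0, 1_200, 2, 0],
    [5.5, 4_800, 1, 1],
    [2.1,   900, 3, 0],
    [4.7, 3_500, 2, 1],
    [6.0, 5_100, 4, 0],
] * 40)  # repeated to stand in for hours of normal activity on this machine

# Train a small unsupervised model on the device's own "normal" behaviour.
model = IsolationForest(n_estimators=100, contamination=0.01, random_state=0)
model.fit(baseline)

def score_event(event: list[float]) -> None:
    """Score a live telemetry sample; isolate the process if it looks anomalous."""
    verdict = model.predict([event])[0]  # +1 = normal, -1 = anomaly
    if verdict == -1:
        print(f"ANOMALY {event}: quarantining process (stub)")
    else:
        print(f"ok      {event}")

score_event([4.0, 2_000, 2, 0])        # looks like normal background activity
score_event([72.0, 950_000, 400, 12])  # ransomware-like burst, flagged locally
```

Because both training and scoring stay on the endpoint, the decision to flag or isolate a process can be made in milliseconds without sending telemetry off the device, which is precisely the appeal of pushing this work onto NPUs at the edge.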

AI will have positive effects in many industries, but we can’t treat it as an unalloyed good. Risks abound even within the legitimate AI tools that lawful businesses use, and we should remember that every technology can and will be used for both good and ill. In the end - in cybersecurity and everywhere else - if we want to seize the benefits of AI, we’ll also have to factor in the threats it poses.
