AI in adversarial hands: The evolving face of digital security risks

By Vidya Shankaran, Field CTO – Emerging Technologies, Commvault.

Wednesday, 24th January 2024

It’s back to business as usual for organisations following the recent AI Safety Summit at Bletchley Park in the UK, which brought together AI experts from around the world. While the event produced a mix of positive predictions and dire warnings, in essence attendees agreed to foster the benefits of AI and deter its misuse, although that is easier said than done. Following the Bletchley Declaration, signed by 28 countries and the European Union, participants committed to virtual summits every six months between annual conferences, and the UK government commissioned a ‘State of the Science’ report.

What’s more, European Union negotiators recently reached a landmark deal on the regulation of AI, although the legislation is not expected to take effect until 2025 at the earliest. The proposals contain safeguards around the use of AI within the EU, including rules governing systems like ChatGPT and facial recognition.

Clearly, AI is signalling the birth of a new era of technological innovation and advancement; its usage and application are being both heralded and closely scrutinised. Who is using it, how it is being used, and for what purpose are all key questions facing governments and corporations today.

The upshot for IT security teams is the need to build cyber resilience in the face of threat actors weaponising AI. For most, trying to stay one step ahead of malicious actors is nothing new. But what is different, and cause for concern, is how AI is levelling the cyberthreat playing field.

A new league of attackers

What used to take malicious actors considerable human effort, knowledge, and technical expertise can now be handled more efficiently and effectively by large language models (LLMs) and AI. Take, for example, a plot to compromise an organisation’s supply chain.

AI can simplify and speed up the orchestration of such an attack by automating and coordinating stages of the process that used to be carried out manually. In the planning stage, AI tools can analyse vast amounts of data to identify potential vulnerabilities within the supply chain, then devise ways of exploiting the most susceptible entry points based on the specific weaknesses detected.

Next, to help execute the attack, LLMs like ChatGPT can be instructed to create phishing emails and social posts. Drawing on a vast knowledge base, the content is crafted with credible supply chain references and information relevant to the recipient’s role and interests, making the real underlying intent much harder for busy employees to recognise. These communications are often far more convincing than those written in the past; even the wariest could be tempted to click on a suspicious link or inadvertently give away pieces of information that could help a criminal.

Once an attack is underway, AI-driven automation can optimise the delivery of malicious payloads across the supply chain and ecosystem, choosing the most opportune moments and routes to evade detection by cyber defences.

This capability to process huge datasets, learn from patterns, and make dynamic decisions can make attacks appear highly complex, wide-ranging, and potentially more dangerous than they are, perhaps implying that a sophisticated criminal network is behind them. However, supported by AI tools, relative newcomers can produce comprehensive attacks, ostensibly putting them on a par with seasoned individuals and larger, more experienced outfits.

A game of two halves

Organisations need to be ready to cope with this growing number of smarter attacks and to determine their criticality quickly; otherwise, they run the risk of either overreacting or missing the early warning signs of a potential breach. This requires leveraging AI to beat criminals at their own game.

With the boot on the other foot, AI tools can be deployed to analyse vast amounts of threat intelligence from multiple sources in real time, identifying hidden indicators of cyber threats. They can monitor the behaviour of users and entities, comparing it against benchmarks of normal activity and pinpointing anomalies that indicate a compromise, as the sketch below illustrates. Analysis of historical data can help anticipate potential threats, enabling faster and more accurate assessment of future risks. Additionally, ongoing monitoring of incoming threat data ensures that cyber defences can be swiftly adapted to combat emerging threats and tactics.
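
To make the behavioural-benchmark idea concrete, here is a minimal Python sketch of per-user anomaly detection. It assumes a deliberately simple statistical baseline (a z-score over historical hourly event counts); the function names, data shapes, and the 3-sigma threshold are illustrative assumptions, not a description of any particular product.

```python
# A minimal sketch of user-behaviour anomaly detection: flag activity whose
# hourly event count deviates sharply from that user's historical baseline.
# All names, data shapes, and thresholds here are illustrative assumptions.
from statistics import mean, stdev

def build_baseline(history: dict[str, list[int]]) -> dict[str, tuple[float, float]]:
    """Compute a per-user (mean, stdev) baseline from hourly event counts."""
    return {user: (mean(counts), stdev(counts))
            for user, counts in history.items() if len(counts) > 1}

def is_anomalous(user: str, observed: int,
                 baseline: dict[str, tuple[float, float]],
                 threshold: float = 3.0) -> bool:
    """Flag activity more than `threshold` standard deviations from the user's norm."""
    if user not in baseline:
        return True  # no baseline yet: treat unseen users as worth reviewing
    mu, sigma = baseline[user]
    if sigma == 0:
        return observed != mu
    return abs(observed - mu) / sigma > threshold

# Example: alice normally generates ~20 events per hour; a burst of 95 is flagged.
history = {"alice": [18, 22, 19, 21, 20, 23], "bob": [5, 7, 6, 4, 6, 5]}
baseline = build_baseline(history)
print(is_anomalous("alice", 95, baseline))  # True: a possible compromise indicator
print(is_anomalous("bob", 6, baseline))     # False: within normal range
```

Production systems learn far richer behavioural models than a single statistic, but the principle is the same: establish what normal looks like per user or entity, then surface significant deviations for investigation.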

AI plays a pivotal role in endpoint protection too, deploying algorithms that recognise and neutralise malware and ransomware, including previously unseen variants. Coupled with AI tools specialising in phishing detection, malicious communications can be differentiated from legitimate ones and quarantined for further investigation, cutting down the risk of recipients clicking on suspicious links.
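
As a rough illustration of that quarantine step, the sketch below scores a message against a handful of suspicious patterns and routes high scorers aside for review. The signal list, weights, and threshold are invented for the example; real phishing detectors rely on trained classifiers and far richer features than keyword matching.

```python
# A minimal sketch of phishing triage using a heuristic scorer. The patterns,
# weights, and threshold are illustrative assumptions, not a real detector;
# production tools use ML classifiers trained on large labelled mail corpora.
import re

SUSPICIOUS_SIGNALS = {
    r"verify your account": 2.0,
    r"urgent(ly)? (action|response)": 1.5,
    r"click (here|below)": 1.0,
    r"https?://\d{1,3}(\.\d{1,3}){3}": 2.5,  # raw-IP links rarely appear in legitimate mail
}

def phishing_score(body: str) -> float:
    """Sum the weights of every suspicious pattern found in the message body."""
    return sum(weight for pattern, weight in SUSPICIOUS_SIGNALS.items()
               if re.search(pattern, body, re.IGNORECASE))

def triage(body: str, quarantine_threshold: float = 2.0) -> str:
    """Route a message: quarantine for investigation if its score is high."""
    return "quarantine" if phishing_score(body) >= quarantine_threshold else "deliver"

print(triage("Urgent action required: verify your account at http://192.0.2.7/login"))
# -> quarantine
```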

When an incident or breach does occur, AI-driven automation can aid the investigation and mitigation process, expediting remediation to minimise further damage.

Plugging the skills gap

Today’s security platforms can deliver a comprehensive range of AI-powered protection and response measures to beat attackers, including early warning, threat detection, incident readiness, rapid response, and cyber recovery. Given the ongoing shortage of cybersecurity staff, these solutions can also help fill any gaps with always-on defences.

Augmented by human intelligence, AI’s adaptive nature will keep security teams in the same league as their criminal opponents, competing on an equal footing and keeping their cyber resilience high. Organisations that fail to keep pace with AI adoption are giving attackers a significant head start and will be at much greater risk of serious breaches. Let’s not let cybercrime get the upper hand in AI.