How CISOs can ensure AI is secure-by-design

By Brandon Green, Senior Solutions Architect & Threat Modeling SME, IriusRisk.


AI is no longer science fiction – it is here and is changing the cybersecurity game for businesses globally.

According to government research, one in six UK organisations has embraced at least one AI technology. As businesses become more reliant on AI to support complex tasks in high-risk environments such as banks and governments, the pressure is on security teams and developers to ensure the technology is resilient not only against external threats but also against internal flaws.

Chief Information Security Officers (CISOs) must implement the right processes and practices to ensure that the AI they develop and use is responsible and secure.

One solution they can apply is threat modeling. This involves analysing software at the design stage of the development process for potential risks and determining the most effective ways to mitigate them.

Assessing vulnerabilities of AI models

Step one of threat modeling AI is conducting a risk assessment to identify any vulnerabilities within the software that could cause problems for the business later. With AI in particular, data processing is a big priority, and CISOs need to ensure that the data that trains the models is of good quality, reliable and secure.

AI models will eat up any information fed to them, including confidential customer data, trade secrets, and other sensitive details that organisations would not want leaked. In fact, one small leak could become a flood of exposed information, as witnessed recently with the massive 26 billion-record data leak exposing data from Twitter, Dropbox, LinkedIn and others.

Threat actors aren’t just trying to steal data either. Their aim is to hijack the AI model itself – from injecting malicious training data to exploiting model vulnerabilities. In practice, this could mean an AI chatbot suddenly starting to spew hate speech or give customers terrible advice. In turn, the reputational damage to a firm could be catastrophic.

But AI does not exist in a vacuum. It relies on a complex web of servers, networks, and data pipelines. Each of these components is a potential entry point for attackers. One small misconfiguration could give hackers the keys to an entire system.

That’s why it is critical to conduct a thorough security audit of the AI infrastructure, addressing any gaps in these areas and protecting data pipelines with encryption both in transit and at rest.
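
As an illustration of the "at rest" half of that advice, here is a minimal sketch, assuming Python's cryptography package, of encrypting a training-data file with Fernet symmetric encryption. The file names and key handling are placeholders; in practice the key should come from a KMS or secrets manager, never sit next to the data.

```python
# Minimal sketch: encrypting a training-data file at rest with Fernet.
# File names and key handling are illustrative placeholders only.
from cryptography.fernet import Fernet

def encrypt_file(in_path: str, out_path: str, key: bytes) -> None:
    cipher = Fernet(key)
    with open(in_path, "rb") as f:
        plaintext = f.read()
    with open(out_path, "wb") as f:
        f.write(cipher.encrypt(plaintext))

def decrypt_file(path: str, key: bytes) -> bytes:
    cipher = Fernet(key)
    with open(path, "rb") as f:
        return cipher.decrypt(f.read())

if __name__ == "__main__":
    key = Fernet.generate_key()                 # in practice: store in a secrets manager
    with open("training_data.csv", "wb") as f:  # toy stand-in for a real dataset
        f.write(b"customer_id,balance\n1,1000\n")
    encrypt_file("training_data.csv", "training_data.csv.enc", key)
    print(decrypt_file("training_data.csv.enc", key))
```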

It is also vital to conduct regular penetration tests specifically targeting AI systems, simulating various attack types such as data poisoning during training, adversarial attacks on deployed models, and model extraction attempts. By threat modeling AI systems in this way, developers can identify and remediate vulnerabilities before hackers find and exploit them.
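
The simplified probe below hints at what one such test could look like. It is not a full adversarial attack such as FGSM, but it perturbs a model's inputs and measures how easily its predictions flip; the scikit-learn model and dataset are illustrative assumptions.

```python
# Simplified robustness probe: how often do predictions flip under small
# random input perturbations? Model and data are illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

def flip_rate(model, X, epsilon: float, trials: int = 20) -> float:
    """Fraction of predictions that change under random perturbations of size epsilon."""
    baseline = model.predict(X)
    flips = 0.0
    for _ in range(trials):
        noisy = X + np.random.uniform(-epsilon, epsilon, size=X.shape)
        flips += np.mean(model.predict(noisy) != baseline)
    return flips / trials

for eps in (0.01, 0.1, 0.5):
    print(f"epsilon={eps}: flip rate {flip_rate(model, X, eps):.2%}")
```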

Implementing the right processes

AI, like any other technology, can fail. Without appropriate safeguards in place, businesses can haemorrhage data and money before a human even notices something is wrong. The consequences of facial recognition violating privacy or algorithms perpetuating biases are immense, potentially costing companies millions and eroding public trust in the technology. In turn, these events can create significant regulatory and legal headaches.

However, CISOs can manage the responsible and secure development and implementation of AI by putting the right processes in place. A good place to start is developing and documenting an "AI Emergency Response Plan" that details fail-safes for each critical AI system and captures the plans an organisation already has in place to deal with system malfunctions. By integrating explainability tools such as LIME or SHAP into the AI pipeline, businesses can then understand and interpret the predictions of their AI and Machine Learning models.
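
As a hedged sketch of that explainability step, the example below uses SHAP to attribute an illustrative model's predictions to individual features. The dataset, model and use of predicted probabilities are placeholders, not a prescribed setup.

```python
# Sketch: explaining individual predictions with SHAP (model and data are
# illustrative stand-ins for a real production pipeline).
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
X, y = data.data, data.target
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Model-agnostic explainer over predicted probability of the positive class;
# the background sample defines the baseline behaviour.
explainer = shap.Explainer(lambda rows: model.predict_proba(rows)[:, 1], X[:100])
explanation = explainer(X[:5])   # SHAP values for five predictions

# Each row shows how much each feature pushed the prediction up or down.
for i, row in enumerate(explanation.values):
    top = sorted(zip(data.feature_names, row), key=lambda t: abs(t[1]), reverse=True)[:3]
    print(f"Prediction {i}: top contributing features {top}")
```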

The "black box" nature of many AI systems is a major liability. Maintaining comprehensive logs of model inputs, outputs, and decisions makes it easier to refer back and explain why such decisions by AI models were made in the first place – without the stress of having to scramble for answers.

Furthermore, establishing an AI Governance Committee or AI Ethics Board for regular audits and ethical impact assessments will ensure that people keep on top of their AI models and can efficiently make any necessary adjustments.

Regular evaluation of AI systems

As AI technology continues to evolve at pace, what is secure today may not be tomorrow. Therefore, organisations must constantly reassess and update their security measures.

One way to do this is to create an AI Security Operations Center (AI SOC) responsible for ongoing monitoring, assessment, and rapid response to AI-specific threats. By establishing real-time monitoring of AI system performance and security metrics, and putting in place a process for rapid security patching of AI models and infrastructure, businesses can ensure they have the necessary measures in place to mitigate possible cyber threats.
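
One small piece of that monitoring could look like the drift check sketched below, which compares live input features against the training baseline using a Kolmogorov-Smirnov test. The threshold, data and alerting logic are placeholders.

```python
# Sketch: a simple input-drift check an AI SOC might run continuously.
# Data, threshold and alerting are illustrative placeholders.
import numpy as np
from scipy.stats import ks_2samp

def check_drift(baseline: np.ndarray, live: np.ndarray, alpha: float = 0.01):
    """Flag features whose live distribution differs significantly from training."""
    alerts = []
    for i in range(baseline.shape[1]):
        stat, p_value = ks_2samp(baseline[:, i], live[:, i])
        if p_value < alpha:
            alerts.append((i, stat, p_value))
    return alerts

rng = np.random.default_rng(0)
baseline = rng.normal(0, 1, size=(5000, 4))    # stand-in for training data
live = baseline + np.array([0, 0, 1.5, 0])     # feature 2 has drifted
for feature, stat, p in check_drift(baseline, live):
    print(f"ALERT: feature {feature} drifted (KS={stat:.2f}, p={p:.1e})")
```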

Automating threat detection saves firms time and improves overall resilience – the system can detect and suggest ways to respond to suspicious activity with more speed and accuracy than humans alone.
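
As an illustrative example of that automation, the sketch below trains an IsolationForest on normal traffic to an AI service and flags outliers such as a scraping-style request burst that might signal a model extraction attempt. The chosen features (request rate, payload size, error rate) are assumptions, not a prescribed schema.

```python
# Sketch: automated anomaly detection on traffic to an AI service.
# Feature schema and thresholds are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Normal traffic: [requests/min, avg payload KB, error rate]
normal = np.column_stack([
    rng.normal(60, 10, 2000),
    rng.normal(4, 1, 2000),
    rng.normal(0.01, 0.005, 2000),
])
detector = IsolationForest(contamination=0.01, random_state=42).fit(normal)

# A burst of scraping-style requests versus an ordinary request profile.
samples = np.array([[900.0, 3.5, 0.0], [62.0, 4.1, 0.01]])
for row, label in zip(samples, detector.predict(samples)):
    status = "SUSPICIOUS" if label == -1 else "normal"
    print(f"{row} -> {status}")
```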

Creating robust AI models for the future of business

Businesses are increasingly developing and implementing AI into their systems, but to avoid massive disruptions, they must ensure AI models are safe and secure against emerging threats.

This is where threat modeling comes in. By assessing vulnerabilities from the get-go, as well as ensuring and improving the safety and transparency of AI models, it can provide a real growth opportunity for businesses, saving both time and money. However, with cyber threat actors also tapping into AI’s potential, the technology is a force to be reckoned with.

As a result, CISOs must take a proactive – rather than reactive – approach to cybersecurity. They must ensure their organisations have a robust threat model for their AI systems – or risk the consequences of failing to do so.
