Before adopting Artificial Intelligence, CISOs must answer the following questions

Artificial intelligence (AI) is now prevalent in almost every aspect of our day-to-day lives, and security is no exception. Looking at the security landscape, supervised machine learning (ML) is well established in threat detection, while unsupervised ML and deep learning are increasingly popular tools for post-breach anomaly detection. By Jeremy D’Hoinne, Research Vice President, Gartner.

However, as AI use in security technology becomes ever more commonplace, CISOs need to study not only its benefits but also the risks of using AI long-term. By 2022, replacing conventional security approaches with ML technologies will leave a third of organisations less secure.

Take a step back

There are many myths and misconceptions when it comes to AI, and a security team that fails to understand them could do untold damage.

Where can AI be useful to a CISO’s strategy? The most advanced AI algorithms can immediately surface key information about vulnerabilities, attacks, threats, incidents and responses. Engaging this capability in the first instance can noticeably strengthen a security team. Other tools such as probabilistic reasoning (often generalised as ML) and computational logic (commonly referred to as rule-based systems) can also bring huge benefits.

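To make that distinction concrete, here is a minimal sketch contrasting the two approaches on a single login event. It is an illustration only, not any vendor’s implementation: the event fields, the rule, the baseline data and the threshold are all invented for the example.

```python
# Minimal sketch: computational logic (an explicit rule) versus
# probabilistic reasoning (scoring deviation from observed behaviour).
# All field names, values and thresholds are illustrative assumptions.

from statistics import mean, stdev

def rule_based_alert(event):
    # Fires only on conditions an analyst wrote down in advance.
    return event["failed_logins"] > 5 and event["country"] not in {"GB", "US"}

def anomaly_score(value, history):
    # z-score: how many standard deviations the value sits from the
    # historical mean. Higher means more unusual.
    return abs(value - mean(history)) / stdev(history)

event = {"failed_logins": 4, "country": "GB", "login_hour": 3}
usual_login_hours = [9, 10, 9, 11, 10, 9, 10, 8, 9, 10]  # made-up baseline

print(rule_based_alert(event))                                      # False: no rule matched
print(anomaly_score(event["login_hour"], usual_login_hours) > 3.0)  # True: a 3am login is unusual
```

The rule misses the 3am login because nobody wrote a rule for it; the statistical score flags it without one. That trade-off, explicit and auditable versus adaptive and probabilistic, is the practical difference between the two families of tools.
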
So what are the risks? Because many AI algorithms consume large amounts of data, CISOs must understand the implications an AI product has for data security and privacy. Many mature organisations have jumped on the AI bandwagon without evaluating the tools they already use, which may be doing a more effective job of security and risk management than a new tool could achieve.

ML is not immune to attack, and AI should not be treated as a silver bullet for protection. There is no assurance that AI outperforms alternative techniques, and it can be fallible, offering incorrect or incomplete conclusions.

CISOs ought to focus on the desired end outcome of their security strategy and ask themselves the following questions:

- Do my team and I have a strong understanding of different AI implementations?

- Do we need AI for the problem we are trying to solve?

- Are these AI tools designed to address our specific needs?

- How can we measure whether it is worth the investment? (A simple illustration follows this list.)

- What will be the impact of using AI for this?

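On the measurement question, one hedged sketch: before investing, score the candidate AI tool against the control already in place, on the same labelled historical alerts. The labels and both sets of verdicts below are fabricated placeholders; the point is the comparison, not the numbers.

```python
# Illustrative sketch only: score a candidate AI tool against the tool
# already in place, on the same labelled historical events. The labels
# and verdicts below are fabricated placeholders for the example.

def precision_recall(verdicts, labels):
    # verdicts/labels are parallel lists of booleans (True = malicious).
    tp = sum(v and l for v, l in zip(verdicts, labels))
    fp = sum(v and not l for v, l in zip(verdicts, labels))
    fn = sum(not v and l for v, l in zip(verdicts, labels))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

labels            = [True, True, False, False, True, False, False, True]
incumbent_verdict = [True, False, False, False, True, False, True, True]
candidate_verdict = [True, True, False, True, True, False, False, True]

for name, verdicts in [("incumbent", incumbent_verdict),
                       ("candidate AI", candidate_verdict)]:
    p, r = precision_recall(verdicts, labels)
    print(f"{name}: precision={p:.2f} recall={r:.2f}")
```

If the candidate does not clearly beat the incumbent baseline on a test like this, the earlier question of whether AI is needed for the problem largely answers itself.
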
Beyond the hype

When AI first entered the technology landscape, we saw exaggerated expectations of what it might be able to do, coupled with fears about job security. The idea that AI was going to replace human jobs became rampant. In 2020, this should no longer be a concern: through this year, AI will be a net creator of jobs, generating 2.3 million whilst eliminating only 1.8 million.

AI has the potential to improve the effectiveness of a security team; however, the idea that it can give organisations the ability to predict future attacks is nothing more than a smokescreen.

Organisations should find AI at its most effective when humans work with the technology rather than relying on it to replace human input entirely. We should not be aiming for AI automation alone; instead, CISOs should strive for ‘smart automation’ in their technology strategy.

Although AI may feel like an established technology, it is in fact still an emerging one. In theory, it is not a CISO’s role to determine whether an emerging tool will benefit their organisation; the technology should prove that itself over time. Switching to this mindset will ensure that security and risk management leaders do not fall foul of the wrong AI strategy.

Today, AI is still far from its full capacity. CISOs need to balance two biases, resistance to change and fear of missing out, to achieve a balanced AI strategy.