How can companies leverage machine learning to mitigate cyber threats?

By Tom Ashoff, SVP of Engineering, ThreatQuotient.

Monday, 31st July 2023

Cybersecurity has become one of the most crucial aspects of many organisations due to the speed at which cyber threats evolve. This “speed of cybersecurity” makes timely and agile defence measures to detect, analyse, and mitigate cyber risks vital: they are the only way to stay ahead of attackers and protect assets in an increasingly dynamic and interconnected world.

New technologies like cloud computing and automation have led to transformative changes in cybersecurity, though these changes weren’t immediate. Cloud adoption advanced much faster in other IT teams than in cybersecurity departments, as security teams were hesitant to cede control to technologies managed by others.

Similarly, while automation flourished in general IT applications (e.g., Puppet, Chef, Ansible) and business units like marketing (e.g., Pardot, Eloqua, Marketo), adoption lagged within the more risk-averse cybersecurity teams, as they needed to get comfortable with both the risks and rewards prior to deployment.

What does AI bring to cybersecurity? 

The same measured approach makes even more sense when facing a new technology like Artificial Intelligence (AI), which is bigger in scale than either cloud or automation. AI holds tremendous promise for companies in bolstering defence mechanisms, detecting threats, and enabling faster incident response.

While the recent interest in AI has focused on generative technology, it is not the only aspect of AI to consider. AI technologies can be applied to security operations in various ways, providing businesses and organisations with unique capabilities. Natural Language Processing (NLP), Machine Learning (ML) and Generative AI are three critical pillars of AI technology that offer tremendous support for organisations’ security operations.

· Natural Language Processing (NLP) focuses on the interaction between computers and human language. It involves analysing and understanding language to enable machines to comprehend and respond to human-created text. This is critical when parsing unstructured data in many forms: reports, emails, RSS feeds, etc.

Organisations can leverage NLP to streamline a variety of security operations. For example, companies can identify and extract threat intelligence, such as IOCs (indicators of compromise), malware, adversaries, and tags, from unstructured data using named entity recognition and keyword matching.

NLP can also be used to extract meaning and context from unstructured text in data feed sources and finished intelligence reports, and to parse reports, events, or PDFs. This saves analysts time by removing the manual steps of reading and extracting data from reports that they undertake today, freeing them to be more proactive in addressing risk within their environment.

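To make this concrete, below is a minimal sketch of that kind of extraction: it pulls indicator-like strings (IP addresses, file hashes, domains) and a few adversary names out of free-form report text using regular expressions and keyword matching. The patterns, keyword list, and sample report are illustrative assumptions, not a description of any particular product, and keyword matching here stands in for a full named entity recognition model.

```python
import re

# Illustrative indicator patterns -- simplified, not production-grade detection rules.
IOC_PATTERNS = {
    "ipv4":   re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "sha256": re.compile(r"\b[a-fA-F0-9]{64}\b"),
    "domain": re.compile(r"\b[a-z0-9-]+\.(?:com|net|org|io)\b", re.IGNORECASE),
}

# Hypothetical keyword list standing in for a trained named-entity model.
ADVERSARY_KEYWORDS = ["APT29", "FIN7", "Lazarus Group"]

def extract_threat_intel(text: str) -> dict:
    """Return indicator-like strings and adversary keywords found in free text."""
    findings = {name: sorted(set(p.findall(text))) for name, p in IOC_PATTERNS.items()}
    findings["adversaries"] = [k for k in ADVERSARY_KEYWORDS if k.lower() in text.lower()]
    return findings

if __name__ == "__main__":
    report = (
        "The campaign, attributed to APT29, used 203.0.113.45 for C2 "
        "and staged payloads on update-check.net."
    )
    print(extract_threat_intel(report))
```

In practice a trained entity-recognition model and curated watchlists would replace these hand-written patterns, but the workflow is the same: unstructured text in, structured intelligence out.
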
· Machine Learning (ML) enables computers to learn from data and make predictions or take actions without being explicitly programmed. It involves algorithms that iteratively improve their performance based on training examples.

Organisations leverage ML to make sense of data from disparate sources in order to accelerate detection, investigation, and response. This process generally starts with getting data in different formats and languages from different vendors and systems to work together. ML can also incorporate the results of automated actions as feedback, so that further action can be initiated. The more data and context available, the more the machine learning engine learns and improves. The engine focuses on correlation and prioritisation, which ultimately helps get the right data to the right systems and teams at the right time, making security operations more data-driven, efficient, and effective.

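As a rough illustration of that correlation-and-prioritisation idea, the sketch below trains a simple scikit-learn classifier on a handful of made-up indicator features (source reliability, corroborating feeds, recent sightings) and uses its scores to rank which indicators to push to detection systems first. The features, data, and model choice are assumptions for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: each row describes one indicator as
# [source_reliability (0-1), corroborating_feeds, sightings_last_7_days];
# the label records whether analysts previously judged it worth escalating.
X_train = np.array([
    [0.9, 5, 12],
    [0.8, 3,  4],
    [0.4, 1,  0],
    [0.2, 0,  1],
    [0.7, 4,  9],
    [0.3, 1,  0],
])
y_train = np.array([1, 1, 0, 0, 1, 0])

model = LogisticRegression()
model.fit(X_train, y_train)

# Score new indicators and surface the highest-priority ones first.
new_indicators = np.array([
    [0.85, 4, 7],   # well corroborated and recently seen
    [0.25, 0, 0],   # low reliability, no corroboration
])
scores = model.predict_proba(new_indicators)[:, 1]
for features, score in sorted(zip(new_indicators.tolist(), scores),
                              key=lambda pair: pair[1], reverse=True):
    print(f"priority={score:.2f}  features={features}")
```

A real engine would learn from far richer telemetry and analyst feedback, but the principle is the same: score, rank, and route the right data to the right place.
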
· Generative AI focuses on creating systems capable of generating original content, such as profiles, reports, or security automation. It uses deep learning models trained on existing data to produce realistic outputs resembling human creativity. Generative AI is the most recent form of AI to capture everyone’s attention.

There are several such tools available today, including ChatGPT, Google Bard, and Microsoft Bing Chat, and as the technology evolves the possibilities seem endless. Generative models can learn patterns from existing malware samples and generate new ones, aiding in identifying and detecting malicious software. They can create adversarial examples, helping organisations test and strengthen their resilience against attacks. And they can generate automated responses or actions based on identified threats or attack patterns, enabling faster and more effective incident response.

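As a hedged sketch of that last use case, the example below asks a generative model to draft triage notes and suggested actions for an alert. It assumes the openai Python package (v1 or later) with an API key in the environment, and the alert, model name, and prompt are placeholders rather than recommendations.

```python
from openai import OpenAI  # assumes the openai package (v1+) and OPENAI_API_KEY set

client = OpenAI()

# Hypothetical alert produced by earlier correlation and prioritisation steps.
alert = {
    "indicator": "update-check.net",
    "adversary": "APT29",
    "observed_on": ["web-proxy-03", "laptop-1142"],
}

def draft_response_actions(alert: dict) -> str:
    """Ask a generative model to draft a triage summary and suggested next steps."""
    prompt = (
        "You are assisting a security analyst. Given this alert, write a short "
        f"triage summary and three suggested response actions:\n{alert}"
    )
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return completion.choices[0].message.content

if __name__ == "__main__":
    print(draft_response_actions(alert))
```

Any draft produced this way should be reviewed by an analyst before action is taken, which is exactly the kind of measured adoption discussed below.
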
Responsible collaboration is vital in enhancing security operations 

As cybersecurity continues to grapple with the ever-accelerating evolution of attacks, which will themselves begin to use AI and machine learning techniques, embracing transformative technologies becomes crucial. It is impossible to overlook the enormous potential of Generative AI and other AI applications to assist organisations’ security operations with enrichment, automation, and remediation.

While the promise is significant, the risks and rewards of these technologies must be understood before they can be widely adopted. It is essential to approach the implementation of any AI technology responsibly. Companies should take a measured approach, focus on specific use cases, choose carefully which parts they plan to rely on, and be open to learning and improving over time.