Fraud attacks are scaling and anti-fraud detection needs to do the same

By Danny Kadyshevitch, Senior Product Lead, Detection and Response, Transmit Security.

  • Friday, 17th May 2024

In May, fraudsters used a fake WhatsApp account, video footage and voice recordings to impersonate Mark Read, CEO of the world’s largest advertising firm, WPP. The fraudsters tried, unsuccessfully, to draw colleagues into discussing a new business venture and, in doing so, to extract sensitive information and even steal funds.

This should tell us something important: it’s boom time for fraudsters. New developments in technology have allowed them to scale their attacks to sizes and levels of complexity that would have been out of reach for most of them only a year ago. New services and open source tools are dramatically lowering the barrier to entry and opening up new opportunities.

The unfortunate reality is that although legitimate actors can access these same technologies, many are still stuck with legacy fraud engines that AI-enabled tactics can easily circumvent. As a result, many companies face fraud attacks they cannot effectively counter, or even detect.

Early last year, PayPal experienced 35,000 simultaneous account takeover attacks in a single incident. It’s exactly this kind of attack that fraudsters are increasingly mounting and that organisations are finding harder to face down. Defenders may be able to spot many - even all - of those attacks individually, but miss the fact that a larger campaign is at work. Yet that is the mounting reality of fraud: complex, multi-faceted, large-scale attacks that can circumvent fraud detection and prevention systems.

What changed in fraud?

So how did fraudsters get their hands on these new capabilities that allow them to do things they could never have attempted only a few years ago? Generative AI has a lot to do with it. As the AI revolution has rolled out, fraudsters have found new ways to turn a growing set of applications and models to their own nefarious ends. These can be legitimate public-facing applications such as ChatGPT, or purpose-built AI models designed to help people carry out malicious attacks, such as FraudGPT.

Perhaps more importantly, publicly available frameworks have also allowed them to automate their attacks and scale them to heretofore unseen sizes. Automation lets them carry out repetitive, menial attack steps far more quickly, so fraudsters can massively scale their attacks and become more creative in the ways they choose to confuse, distract and compromise their targets.

Those scaled attacks can be launched to a variety of ends. They can be merely an attempt to cast a wide net and ensure that as many attempts as possible make it through detection systems. They can also be part of a wider campaign: for example, attackers can use generative AI to generate decoy traffic for an evasion attack, flooding intrusion detection systems with malicious-looking activity to confuse them and lay the ground for a larger attack.

Why legacy fraud detection fails

Legacy fraud detection tools have served us well for a long time. Using rules and signatures, they can spot a whole collection of known threats, but the problem comes when attacks become more complex or creative - which new technologies are increasingly allowing fraudsters to be.

Take signatures, for example. Many anti-virus engines work off a database of unique signatures, each associated with an individual threat that has been discovered and recorded. They struggle when they face new threats and tactics not yet recorded in that database. In fact, malware authors have become very good at hiding their telltale signs and thereby evading antivirus engines and similar protection measures. They have started to employ tactics like steganography to obscure signatures, adopted fileless techniques, and simply produced more kinds of malware that have not yet been recorded in the databases on which these protection measures rely.
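To make the limitation concrete, here is a minimal sketch, in Python, of how a signature-based check works in principle: a file is flagged only if its hash appears in a database of previously catalogued threats. The hash value and helper names are purely illustrative, not drawn from any real product.

```python
import hashlib

# Illustrative "signature database": hashes of samples already catalogued.
# The entry below is a placeholder value, not a real malware signature.
KNOWN_BAD_SHA256 = {
    "9f2a1c7d4e8b0366f1d5a2b9c4e7d8a1f0b3c6e9d2a5b8c1e4f7a0d3b6c9e2f5",
}

def is_flagged(file_bytes: bytes) -> bool:
    """Flag a file only if its hash matches a recorded signature.

    Any new, repacked or slightly modified sample produces a different
    hash, so it passes straight through - the weakness described above.
    """
    digest = hashlib.sha256(file_bytes).hexdigest()
    return digest in KNOWN_BAD_SHA256

# A sample that has never been catalogued is simply not detected.
print(is_flagged(b"brand new malware variant"))  # False
```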

Rules suffer from a similar problem. These govern how an underlying detection system treats abnormal or suspicious behaviour, and they are often simple if/then statements: something like “if you spot x transactions of over x amount within x amount of time, then block those transactions.” Those rules have turned out to be remarkably brittle to changes in the fraud landscape. Furthermore, they don’t account for anything much more creative than someone simply trying to steal money directly out of an account. The problem is that even individual transactions can be a small part of a larger picture. They could be attempts to probe a detection system and, in turn, work out its rules by deducing them from failed attempts. They could equally be a distraction - an attempt to divert defenders from a larger attack happening elsewhere. Or they could just be part of a much larger fraud campaign that overwhelms defences. After all, a rule-based detection system might be able to spot a few risky transactions, but it will struggle when asked to spot tens of thousands.
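A minimal sketch of such an if/then rule, with purely illustrative thresholds, shows how easy it is to sidestep: a fraudster who keeps each transfer just under the amount limit, or spaces transfers just outside the time window, never triggers it.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Transaction:
    account_id: str
    amount: float
    timestamp: datetime

# Illustrative thresholds - the "x transactions over x amount in x time" rule.
MAX_COUNT = 3
AMOUNT_THRESHOLD = 1_000.0
WINDOW = timedelta(hours=1)

def should_block(history: list[Transaction], now: datetime) -> bool:
    """Block if too many large transactions land inside the time window."""
    recent_large = [
        t for t in history
        if t.amount > AMOUNT_THRESHOLD and now - t.timestamp <= WINDOW
    ]
    return len(recent_large) >= MAX_COUNT

# Five transfers, each kept just under the threshold: the rule never fires.
now = datetime(2024, 5, 17, 12, 0)
history = [
    Transaction("acct-1", 990.0, now - timedelta(minutes=10 * i)) for i in range(5)
]
print(should_block(history, now))  # False
```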

Fraud has evolved, but many legitimate organisations haven’t adapted. So what should an organisation do in the face of threats that are evolving far past its ability to police them? Many might try to adapt manually. This can involve adding more rules and signatures to the database, but that quickly becomes expensive, time-consuming and complex. As those rules mount up and become unmanageable, they can also produce a whole series of conflicts that ultimately degrade the performance of detection systems. Furthermore, it misses the fundamental problem with these rules: they cannot keep up with the ever-changing nature of fraud tactics.

So what then?

In some sense, the problem is that these legacy fraud detection tools are very good at seeing individual, clear-cut cases of attack in real time, but they have a much harder time seeing the crucial context that reveals the larger attacks increasingly characteristic of this new generation of fraud. So how can organisations see that context and gain the required proactivity?

They may already have the necessary resources in their historical fraud data. By applying machine learning algorithms to that data, organisations can gain new insights into the telltale signs and behaviours of an attack.
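As one hedged illustration of the idea - not a description of any particular vendor’s method - an organisation could train an anomaly detector such as scikit-learn’s IsolationForest on feature vectors derived from its historical transaction data, then score new activity against it. The features and values below are synthetic and purely for demonstration.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic stand-in for features derived from historical fraud data, e.g.
# [amount, transactions_in_last_hour, new_device_flag, geo_distance_km].
rng = np.random.default_rng(seed=0)
historical = rng.normal(loc=[50, 2, 0, 5], scale=[30, 1, 0.2, 10], size=(5000, 4))

# Learn what "normal" looks like from past behaviour.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(historical)

# Score a burst of new activity: a large amount, high velocity, new device, far away.
suspicious = np.array([[900.0, 40.0, 1.0, 4000.0]])
print(model.decision_function(suspicious))  # low score -> anomalous
print(model.predict(suspicious))            # -1 means flagged as an outlier
```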

It will come as no surprise that just as new technologies are becoming a game changer for fraudsters, they’re going to help legitimate organisations too. Machine learning algorithms will effectively help them to integrate risk signals from a wide range of sources - such as their historical fraud data - so an organisation can better understand novel attack patterns and heretofore unseen fraud tactics.
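A simple way to picture that integration, again as an assumption-laden sketch rather than a specific product design, is a scoring layer that fuses several independent risk signals - a model score learned from historical data, a velocity signal, and a device-reputation signal - into one decision. The weights, thresholds and field names here are invented for illustration; in practice they would be learned or tuned.

```python
from dataclasses import dataclass

@dataclass
class RiskSignals:
    model_score: float     # 0..1, from a model trained on historical fraud data
    velocity_score: float  # 0..1, based on recent attempt volume for the account
    device_score: float    # 0..1, based on device and network reputation

# Illustrative weights and thresholds only.
WEIGHTS = {"model_score": 0.5, "velocity_score": 0.3, "device_score": 0.2}
BLOCK_THRESHOLD = 0.7
STEP_UP_THRESHOLD = 0.4

def decide(signals: RiskSignals) -> str:
    """Fuse independent risk signals into a single action."""
    risk = (
        WEIGHTS["model_score"] * signals.model_score
        + WEIGHTS["velocity_score"] * signals.velocity_score
        + WEIGHTS["device_score"] * signals.device_score
    )
    if risk >= BLOCK_THRESHOLD:
        return "block"
    if risk >= STEP_UP_THRESHOLD:
        return "step_up_authentication"
    return "allow"

print(decide(RiskSignals(model_score=0.9, velocity_score=0.8, device_score=0.3)))
```

The point of fusing signals this way is that no single indicator has to be conclusive on its own; it is the combination, seen in context, that surfaces the larger campaign.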

Fraudsters now know how to game the rules of legacy fraud detection, and they are operating at a level that many organisations don’t see or even expect. New threats require evolution and, thankfully, the resources to adapt to this new threat landscape may be closer to hand than many expect.
