AI impersonation is now the hardest cyberattack vector to defend against

Advanced forms of social engineering are on the rise, though obvious gaps like weak passwords are becoming easier to plug.

  • Friday, 25th October 2024, by Phil Alsop

AI impersonation is now the hardest vector for cybersecurity professionals to protect companies against, according to Teleport’s 2024 State of Infrastructure Access Security Report. The study, which surveyed 250 senior US and UK decision-makers, shows that social engineering remains one of the top tactics cybercriminals use to install malware and steal sensitive data, with the advancement of AI and deepfakes further fueling the effectiveness of phishing scams.

When asked to rank the difficulty of each attack vector, respondents placed AI impersonation at the top: 52% cited it as difficult to defend against. The findings reflect the changing landscape of social engineering, with threat actors employing more advanced phishing tactics that target credentials. Criminals have recently built infamous tools such as WormGPT – a malicious "hackbot-as-a-service" chatbot – to craft more convincing phishing campaigns and deepfake impersonations.

“The reason AI impersonation from cyber criminals is so difficult to defend against is that it is getting better and better at mimicking legitimate user behavior with high accuracy, making it challenging to distinguish between genuine and malicious access attempts,” says Ev Kontsevoy, CEO at Teleport.

“AI and deepfake tools aren’t just creating phishing campaigns, but significantly lowering the time and cost to launch the campaigns as well to near-zero. They’re so easy today that a kid in Nebraska could sit in his Mom’s basement and launch hundreds of these per day.”

To fight this wave of AI impersonation and other security threats, a broad majority of respondents (68%) are already using AI-infused tools to make safeguards more accurate and effective. Industry debate continues, however, over the effectiveness of fighting AI with AI.

“The findings here suggest a risk of overconfidence in AI’s ability to protect organizations against social engineering. Using AI to combat this threat is like suggesting that an adversary targets the missiles on a fighter jet, rather than the fighter jet itself,” says Kontsevoy. “The right conversation is, ‘how do we stop employees and enterprises from making their credentials a threat vector?’ As it stands, credentials are pretty much littered across the many disparate layers of the technology stack – Kubernetes, servers, cloud APIs, specialised dashboards and databases, and more.”

The rise in identity-based attacks was cited by 87% of respondents as an ‘important’ factor that is contributing to making infrastructure access security more challenging over time. Almost 40% of companies, however, still haven’t implemented the use of cryptographically authenticated identities for users. These factors combined likely help explain why social engineering attacks like phishing and smishing still represent the second hardest attack vector to defend against (48%), with compromised privileged credentials and secrets (47%) in third place.

On the opposite end of the difficulty curve, ‘weak’ passwords are now the easiest attack vector to defend against according to respondents: only 36% struggle to protect against this vector, and 45% even say it’s ‘easy’ to do so.

“I think what this shows is that the cybersecurity industry is becoming better at plugging the most obvious gaps and weak points. There has certainly been some regulation, such as in the UK, to clamp down on weak passwords. But passwords are just one of many vectors, and credentials exist in many forms like API keys and browser cookies. There are also standing privileges to worry about, which can lead to breach-and-pivot attacks,” adds Kontsevoy.

“Regardless of whether social engineering attacks use AI or not, the solution is always going to be the same: eliminate human error. That means the modern-day security paradigm has to be first and foremost about eliminating secrets and enforcing cryptographic identity, least privileged access, and robust policy and identity governance. Human behaviour exposing infrastructure to data leaks is what we need to learn to defend against, and enforcing these steps is the key to preventing criminals from wreaking more havoc.”
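Kontsevoy's point about replacing standing secrets with short-lived, verifiable credentials can be illustrated with a minimal sketch. All names here (`issue_token`, `verify_token`) are hypothetical, and a real deployment would use public-key certificates issued by a certificate authority rather than a shared HMAC key; the idea shown is simply that access proofs expire quickly instead of existing as standing secrets:

```python
import base64
import binascii
import hashlib
import hmac
import json
import time

# Hypothetical signing key held only by the issuing authority.
SECRET = b"ca-signing-key"

def issue_token(user: str, ttl_seconds: int = 300) -> str:
    """Issue a signed token that expires after ttl_seconds (default 5 min)."""
    payload = json.dumps({"user": user, "exp": time.time() + ttl_seconds}).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload).decode() + "." + sig

def verify_token(token: str) -> bool:
    """Reject tokens with bad signatures or past expiry -- no standing access."""
    try:
        b64_payload, sig = token.rsplit(".", 1)
        payload = base64.urlsafe_b64decode(b64_payload)
    except (ValueError, binascii.Error):
        return False
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    return time.time() < json.loads(payload)["exp"]

token = issue_token("alice")
print(verify_token(token))  # True while the token is inside its 5-minute window
```

Because every token carries its own expiry and signature, a stolen credential is only useful for minutes rather than indefinitely, which is the property standing passwords and long-lived API keys lack.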
