AI impersonation is now the hardest cyberattack vector to defend against

Advanced forms of social engineering are on the rise, though obvious gaps like weak passwords are becoming easier to plug.

Friday, 25th October 2024 · by Phil Alsop

AI impersonation is now the hardest vector for cybersecurity professionals to protect companies against, according to Teleport’s 2024 State of Infrastructure Access Security Report. The study, which surveyed 250 senior US and UK decision-makers, shows that social engineering remains one of the top tactics cybercriminals use to install malware and steal sensitive data, with the advancement of AI and deepfakes further fueling the effectiveness of phishing scams.

When asked to rank the difficulty of each attack vector, respondents most commonly cited AI impersonation, with 52% calling it difficult to defend against. The findings indicate a changing social engineering landscape, with threat actors employing more advanced phishing tactics that target credentials. Criminals have recently created infamous tools like WormGPT – the hackbot-as-a-service – to design more convincing phishing campaigns or deepfake impersonations.

“The reason AI impersonation from cyber criminals is so difficult to defend against is that it is getting better and better at mimicking legitimate user behavior with high accuracy, making it challenging to distinguish between genuine and malicious access attempts,” says Ev Kontsevoy, CEO at Teleport.

“AI and deepfake tools aren’t just creating phishing campaigns; they’re also driving the time and cost of launching those campaigns to near-zero. They’re so easy today that a kid in Nebraska could sit in his Mom’s basement and launch hundreds of these per day.”

To fight this wave of AI impersonation and other security threats, a broad majority of respondents (68%) are already using AI-infused tools to make safeguards more accurate and effective, though industry debate continues over the effectiveness of fighting AI with AI.

“The findings here suggest a risk of overconfidence in AI’s ability to protect organizations against social engineering. Using AI to combat this threat is like suggesting that an adversary targets the missiles on a fighter jet, rather than the fighter jet itself,” says Kontsevoy. “The right conversation is, ‘how do we stop employees and enterprises from making their credentials a threat vector?’ As it stands, credentials are pretty much littered across the many disparate layers of the technology stack – Kubernetes, servers, cloud APIs, specialised dashboards and databases, and more.”

The rise in identity-based attacks was cited by 87% of respondents as an ‘important’ factor making infrastructure access security more challenging over time. Almost 40% of companies, however, still haven’t implemented cryptographically authenticated identities for users. These factors combined likely help explain why social engineering attacks like phishing and smishing still represent the second hardest attack vector to defend against (48%), with compromised privileged credentials and secrets (47%) in third place.

On the opposite end of the difficulty curve, ‘weak’ passwords are now the easiest attack vector to defend against according to respondents: only 36% struggle to protect against this vector, and 45% even say it’s ‘easy’ to do so.

“I think what this shows is that the cybersecurity industry is becoming better at plugging the most obvious gaps and weak points. There has certainly been some regulation, such as in the UK, to clamp down on weak passwords. But passwords are just one of many vectors, and credentials exist in many forms like API keys and browser cookies. There are also standing privileges to worry about, which can lead to breach-and-pivot attacks,” adds Kontsevoy.

“Regardless of whether social engineering attacks use AI or not, the solution is always going to be the same: eliminate human error. That means the modern-day security paradigm has to be first and foremost about eliminating secrets and enforcing cryptographic identity, least privileged access, and robust policy and identity governance. Human behaviour exposing infrastructure to data leaks is what we need to learn to defend against, and enforcing these steps is the key to preventing criminals from wreaking more havoc.”
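The principles Kontsevoy describes – short-lived credentials scoped to the least privilege needed – can be illustrated with a toy sketch. This is a hypothetical example, not Teleport's implementation: it mints a signed access token that expires after a few minutes and is valid for exactly one resource, then rejects anything tampered, expired, or out of scope. (Real deployments use asymmetric certificates, such as SSH certificates, rather than the shared HMAC key used here for brevity.)

```python
import base64
import hashlib
import hmac
import json
import time

# Demo-only shared key; production systems would use asymmetric keys
# so that no signing secret needs to be distributed at all.
SIGNING_KEY = b"demo-only-signing-key"

def issue_token(user: str, resource: str, ttl_seconds: int) -> str:
    """Mint a short-lived credential scoped to a single resource (least privilege)."""
    claims = {"sub": user, "res": resource, "exp": int(time.time()) + ttl_seconds}
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode())
    sig = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return payload.decode() + "." + sig

def verify_token(token: str, resource: str) -> bool:
    """Reject forged, expired, or out-of-scope tokens."""
    payload, _, sig = token.rpartition(".")
    expected = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # signature mismatch: token was forged or tampered with
    claims = json.loads(base64.urlsafe_b64decode(payload))
    return claims["res"] == resource and claims["exp"] > time.time()

token = issue_token("alice", "prod-db", ttl_seconds=300)
print(verify_token(token, "prod-db"))   # in scope and unexpired: True
print(verify_token(token, "k8s-api"))   # out of scope: False
```

Because every token carries its own expiry and scope, a stolen credential is useless against other systems and worthless within minutes – exactly the property that standing privileges and long-lived secrets lack.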
