AI impersonation is now the hardest cyberattack vector to defend against

Advanced forms of social engineering are on the rise, though obvious gaps like weak passwords are becoming easier to plug.

  • Friday, 25th October 2024 · Posted by Phil Alsop

AI impersonation is now the hardest vector for cybersecurity professionals to protect companies against, according to Teleport’s 2024 State of Infrastructure Access Security Report. The study, which surveyed 250 senior US and UK decision-makers, shows that social engineering remains one of the top tactics cybercriminals use to install malware and steal sensitive data, with the advancement of AI and deepfakes further fueling the effectiveness of phishing scams.

When asked to rank the difficulty of each attack vector, AI impersonation was the most commonly cited as difficult to defend against, named by 52% of respondents. The findings point to a changing social engineering landscape, with threat actors employing more advanced phishing tactics that target credentials. Criminals have recently built infamous tools such as WormGPT, a hacking-chatbot-as-a-service, to craft more convincing phishing campaigns and deepfake impersonations.

“The reason AI impersonation from cyber criminals is so difficult to defend against is that it is getting better and better at mimicking legitimate user behavior with high accuracy, making it challenging to distinguish between genuine and malicious access attempts,” says Ev Kontsevoy, CEO at Teleport.

“AI and deepfake tools aren’t just creating phishing campaigns, but significantly lowering the time and cost to launch the campaigns as well to near-zero. They’re so easy today that a kid in Nebraska could sit in his Mom’s basement and launch hundreds of these per day.”

To fight this wave of AI impersonation and other security threats, a broad majority of respondents (68%) are already using AI-infused tools to make safeguards more accurate and effective, though industry debate continues over the effectiveness of fighting AI with AI.

“The findings here suggest a risk of overconfidence in AI’s ability to protect organizations against social engineering. Using AI to combat this threat is like suggesting that an adversary targets the missiles on a fighter jet, rather than the fighter jet itself,” says Kontsevoy. “The right conversation is, ‘how do we stop employees and enterprises from making their credentials a threat vector?’ As it stands, credentials are pretty much littered across the many disparate layers of the technology stack – Kubernetes, servers, cloud APIs, specialised dashboards and databases, and more.”

The rise in identity-based attacks was cited by 87% of respondents as an ‘important’ factor making infrastructure access security more challenging over time. Almost 40% of companies, however, still haven’t implemented cryptographically authenticated identities for their users. These factors combined likely help explain why social engineering attacks like phishing and smishing still represent the second hardest attack vector to defend against (48%), with compromised privileged credentials and secrets (47%) in third place.

On the opposite end of the difficulty curve, ‘weak’ passwords are now the easiest attack vector to defend against according to respondents: only 36% struggle to protect against this vector, and 45% even say it’s ‘easy’ to do so.

“I think what this shows is that the cybersecurity industry is becoming better at plugging the most obvious gaps and weak points. There has certainly been some regulation, such as in the UK, to clamp down on weak passwords. But passwords are just one of many vectors, and credentials exist in many forms like API keys and browser cookies. There are also standing privileges to worry about, which can lead to breach-and-pivot attacks,” adds Kontsevoy.

“Regardless of whether social engineering attacks use AI or not, the solution is always going to be the same: eliminate human error. That means the modern-day security paradigm has to be first and foremost about eliminating secrets and enforcing cryptographic identity, least privileged access, and robust policy and identity governance. Human behaviour exposing infrastructure to data leaks is what we need to learn to defend against, and enforcing these steps is the key to preventing criminals from wreaking more havoc.”
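As one concrete illustration of what “cryptographic identity” with short-lived credentials can look like in practice (a generic sketch using OpenSSH’s built-in certificate authority, not Teleport’s specific implementation), a CA can sign a user’s public key into a certificate that expires after an hour, so there is no long-lived secret for a phisher to steal:

```shell
# Create a CA key pair (assumes OpenSSH 5.6+ certificate support)
ssh-keygen -t ed25519 -f ca_key -N '' -C 'example-ca'

# Create a key pair for the user (names here are illustrative)
ssh-keygen -t ed25519 -f user_key -N '' -C 'alice'

# Sign the user's public key, producing user_key-cert.pub:
#   -s ca_key   : sign with the CA private key
#   -I alice    : certificate identity (for audit logs)
#   -n alice    : allowed login principal
#   -V +1h      : validity window — the credential self-expires
ssh-keygen -s ca_key -I alice -n alice -V +1h user_key.pub

# Inspect the issued certificate and its expiry
ssh-keygen -L -f user_key-cert.pub
```

Servers configured to trust `ca_key.pub` (via `TrustedUserCAKeys` in `sshd_config`) then accept the certificate without any per-user authorized_keys entry, and a leaked credential becomes worthless within the hour.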
