Research explores consumer susceptibility to ChatGPT scams

Survey underscores the importance of vigilance against AI-powered phishing, app and password threats.

  • Wednesday, 13th September 2023 · by Phil Alsop

Beyond Identity has released the findings of its new industry research on the diverse methods hackers are employing to breach systems, steal sensitive information and automate complex processes with the help of generative AI technology.


The survey of 1,000+ Americans demonstrates exactly how convincing ChatGPT scams can be, and how consumers and businesses can protect themselves from falling victim to fraudulent messages, unsafe applications and password theft. The respondents were asked to review different schemes and express whether they’d be susceptible—and if not, to identify the factors that aroused suspicion. Notably, 39% said they would fall victim to at least one of the phishing messages, 49% would be tricked into downloading a fake ChatGPT app and 13% have used AI to generate passwords.


As part of the survey, ChatGPT drafted phishing emails, texts and social media posts, and respondents were asked to identify which they found believable. Of the 39% who said they would fall victim to at least one of the options, the social media post scam (21%) and the text message scam (15%) were the most convincing. Among those wary of all the messages, the top giveaways were suspicious links, strange requests and unusual amounts of money being requested.
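The giveaways respondents cited translate naturally into simple heuristics. As an illustrative sketch only (not part of the survey, and no substitute for a real anti-phishing filter), a rule-based check for those three red flags might look like:

```python
import re

# Hypothetical heuristics mirroring the giveaways respondents cited:
# suspicious links, strange/urgent requests, and unusual sums of money.
SUSPICIOUS_LINK = re.compile(
    r"https?://\S*(?:bit\.ly|tinyurl|\d{1,3}(?:\.\d{1,3}){3})", re.I
)
URGENT_PHRASES = ("act now", "verify your account", "urgent", "suspended")
MONEY_PATTERN = re.compile(r"[$£€]\s?\d{3,}")  # three or more digits

def phishing_flags(message: str) -> list[str]:
    """Return the heuristic red flags found in a message."""
    flags = []
    if SUSPICIOUS_LINK.search(message):
        flags.append("suspicious link")
    if any(p in message.lower() for p in URGENT_PHRASES):
        flags.append("strange request")
    if MONEY_PATTERN.search(message):
        flags.append("unusual amount of money")
    return flags

print(phishing_flags("URGENT: verify your account at http://bit.ly/x or pay $500"))
# → ['suspicious link', 'strange request', 'unusual amount of money']
```

Rules like these catch only the crudest lures; the survey's point is precisely that AI-written messages can avoid such obvious tells, which is why user vigilance alone is not enough.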


“With adversaries using AI, the level of difficulty for attackers will be markedly reduced. While writing well-crafted phishing emails is a first step, we fully expect hackers to use AI across all phases of the cybersecurity kill chain,” said Jasson Casey, CTO of Beyond Identity. “Organisations building apps for their customers or protecting the internal systems used by their workforce and partners will need to take proactive, concrete measures to protect data—such as implementing passwordless, phish-resistant multi-factor authentication (MFA), modern Endpoint Detection and Response (EDR) software and zero trust principles.”


Although 93% of respondents had never had their information stolen through an unsafe app in real life, 49% were fooled when asked to pick out the genuine ChatGPT app from a lineup that included copycat options. Interestingly, those who had fallen victim to app fraud in the past were much more likely to be fooled again.


The survey also explored how hackers can leverage ChatGPT for social engineering. For instance, ChatGPT can use easy-to-find personal information to generate lists of probable passwords with which to attempt account breaches. This is a problem for the one in four respondents who build passwords from personal details, such as birth dates (35%) or pet names (34%), that can be readily found on social media, business profiles and phone listings. While longer passwords with random characters and no personal information may seem like the best defence against this malicious AI capability, the report is clear: any and all passwords are a critical vulnerability for organisations, since bad actors will find other, easier ways into accounts. That makes passwordless, phish-resistant MFA an absolute necessity.
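To see why personal details make such weak password material, consider a minimal sketch (hypothetical, and not Beyond Identity's or any specific attacker's method) of how scattered facts like a pet's name and a birth year can be mechanically combined into a small, high-probability candidate list:

```python
from itertools import product

def candidate_passwords(tokens, years, suffixes=("", "!", "123")):
    """Combine personal tokens with years and common suffixes
    into a deduplicated list of password guesses."""
    candidates = set()
    for token, year, suffix in product(tokens, years, suffixes):
        candidates.add(f"{token}{year}{suffix}")            # e.g. rex1990!
        candidates.add(f"{token.capitalize()}{year}{suffix}")  # e.g. Rex1990!
    return sorted(candidates)

# One pet name and one birth year already yield six plausible guesses.
guesses = candidate_passwords(["rex"], ["1990"])
print(len(guesses))  # → 6
```

Even a handful of public facts multiplies into a list small enough to try quickly yet likely to hit real passwords, which is the report's argument for removing passwords from the equation entirely.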