Nearly half of security professionals agree GenAI is their biggest security risk

HackerOne has released survey data showing that 48% of security professionals believe AI is the most significant security risk to their organization.

  • Sunday, 22nd September 2024 | Posted by Phil Alsop

Ahead of the launch of its annual Hacker-Powered Security Report, HackerOne revealed early findings, which include data from a survey of 500 security professionals. When it comes to AI, respondents were most concerned with the leaking of training data (35%), unauthorized usage of AI within their organizations (33%), and the hacking of AI models by outsiders (32%).

When asked about handling the challenges that AI safety and security issues present, 68% said that an external and unbiased review of AI implementations is the most effective way to identify AI safety and security issues. AI red teaming offers this type of external review through the global security researcher community, who help to safeguard AI models from risks, biases, malicious exploits, and harmful outputs.

“While we’re still reaching industry consensus around AI security and safety best practices, there are some clear tactics where organizations have found success,” said Michiel Prins, co-founder at HackerOne. “Anthropic, Adobe, Snap, and other leading organizations all trust the global security researcher community to give expert third-party perspective on their AI deployments.”

Further research from a HackerOne-sponsored SANS Institute report explored the impact of AI on cybersecurity and found that over half (58%) of respondents predict AI will contribute to an “arms race” between the tactics and techniques used by security teams and cybercriminals. The research also found optimism around the use of AI for security team productivity, with 71% reporting satisfaction after implementing AI to automate tedious tasks. However, respondents believed AI productivity gains have also benefited adversaries, and they were most concerned about AI-powered phishing campaigns (79%) and automated vulnerability exploitation (74%).

“Security teams must find the best applications for AI to keep up with adversaries while also considering its existing limitations — or risk creating more work for themselves,” said Matt Bromiley, Analyst at The SANS Institute. “Our research suggests AI should be viewed as an enabler, rather than a threat to jobs. Automating routine tasks empowers security teams to focus on more strategic activities.”
