CISOs confident about data privacy and security risks of generative AI

Over half of CISOs believe generative AI is a force for good and a security enabler, whereas only 25% think it presents a risk to their organisational security.

  • Friday, 17th May 2024 Posted 1 year ago by Phil Alsop

New data from the latest members’ survey of the ClubCISO community, conducted in collaboration with Telstra Purple, highlights CISOs’ confidence in generative AI in their organisations. Around half of those surveyed (51%), the largest contingent, believe these tools are a force for good and act as security enablers. In comparison, only 25% saw generative AI tools as a risk to their organisational security.

The study's findings underscore the proactive stance of CISOs in comprehending the risks linked to generative AI tools and their active support in implementing these tools across their respective organisations.

45% of respondents said they now allow generative AI tools for specific applications, with the CISO office making the final decision on their use. Just under a quarter (23%) also have region-specific or function-specific rules to govern generative AI use. The findings represent a marked change from when generative AI applications first landed following the launch of ChatGPT, when data privacy and security concerns were top-of-mind risks for organisations.

Despite ongoing concerns around the data privacy of specific applications, 54% of CISOs are confident they know how AI tools will use or share the data fed to them, and 41% have a policy covering AI and its usage. By contrast, only a minority (9%) of CISOs say they have no policy governing the use of AI tools and have not set out a direction either way.

Inspiring further confidence, 57% of CISOs also believe that their staff are aware and mindful of the data protection and intellectual property implications of using AI tools.

Commenting on the findings, Rob Robinson, Head of Telstra Purple EMEA, sponsors of the ClubCISO community, said, “While we do still hear examples of proprietary data being fed to AI tools and then that same data being resurfaced outside of an organisation’s boundaries, what our members are telling us is that this is a known risk, not just in their teams, but across the employee population too.”

He continued, “Generative AI is rightly being seen for the opportunity it will unlock for organisations. Its disruptive force is being unleashed across sectors and functions, and rather than slowing the pace of adoption, our survey highlights that CISOs have taken the time to understand and educate their organisations about the risks associated with using such tools. It marks a break away from the traditional views of security acting as a blocker for innovation.”
