AI use is outpacing policy and governance, ISACA finds

The majority (83%) of IT and business professionals in Europe believe employees in their organisation are using AI.

  • Monday, 30th June 2025, by Phil Alsop

The vast majority (83%) of European IT and business professionals say staff are already using AI at work – up ten points in a year – yet fewer than a third of organisations have put formal policies in place, according to new ISACA research. 

 

AI use is clearly becoming more prevalent in the workplace, and regulating that use is best practice. Yet fewer than a third (31%) of organisations have a formal, comprehensive AI policy in place, highlighting the gap between how widely AI is used and how closely it is governed. 

 

Policies work twofold: enabling activity and protecting businesses 

 

AI is already making a positive impact day to day: over half (56%) of respondents say it has boosted organisational productivity, and 71% report efficiency gains and time savings. Looking ahead, 62% are optimistic that AI will positively affect their organisation in the next year. 

 

Yet that same speed and scale make the technology a magnet for bad actors. Almost two-thirds (63%) are extremely or very concerned that generative AI could be turned against them, while 71% expect deepfakes to grow sharper and more widespread in the year ahead. Despite that, only 18% of organisations are investing in deepfake-detection tools – a significant security gap. This disconnect between rising awareness and a lack of organisational investment leaves businesses exposed at a time when AI-powered threats are evolving fast. 

 

AI has significant promise, but without clear policies and training to mitigate risks it becomes a potential liability. Robust, role-specific guidelines – covering everything from when to use AI to how to spot a deepfake – are needed to help businesses safely harness AI’s potential. 

 

“The UK Government has made it clear through its AI Opportunities Action Plan that responsible AI adoption is a national priority,” says Chris Dimitriadis, ISACA’s Chief Global Strategy Officer. “But while awareness is growing, many organisations are still falling behind when it comes to taking action and adopting it. At the same time AI threats are evolving fast, from deepfakes to phishing, and without adequate training, investment and internal policy, businesses will struggle to keep up. Bridging this risk-action gap is essential if the UK is to lead with innovation and digital trust.” 

 

Education is the way to get the best from AI 

 

But policies are only as effective as the people who understand them – and can confidently put them into practice. 

 

As emerging technologies such as AI continue to evolve, professionals need to upskill and gain new qualifications: 42% believe they will need to increase their AI skills and knowledge within the next six months to retain their job or advance their career – up eight percentage points on last year. Most (89%) recognise they will need to do so within the next two years. 

 

Dimitriadis added: “Without guidance, rules or training in place on the extent to which AI can be used at work, employees might continue to use it in the wrong context or in an unsafe way. Equally, they might not be able to spot misinformation or deepfakes as easily as they might if they were equipped with the right knowledge and tools. Technology is evolving, and bad actors are continuously keeping pace with changes to weaponise them, carrying out more sophisticated and advanced attacks. 

 

“That’s why upskilling can’t wait. AI training must be prioritised and properly budgeted for, and at the same time, workplaces must work to embed formal and comprehensive policies that are understood by all as employees continue to experiment with AI in their day-to-day. With more skilled employees, businesses will have a workforce with a better understanding of best practice. These employees are more likely to champion the embedding of policies, making sure regulations are adhered to and upheld to a good standard.” 
