AI use is outpacing policy and governance, ISACA finds

The majority (83%) of IT and business professionals in Europe believe employees in their organisation are using AI.

  • Monday, 30th June 2025 – Posted by Phil Alsop

Nearly three out of four European IT and cybersecurity professionals say staff are already using generative AI at work – up ten points in a year – but just under a third of organisations have put formal policies in place, according to new ISACA research. 

 

It’s clear that the use of AI is becoming more prevalent within the workplace, and so regulating its use is best practice. Yet not even a third (31%) of organisations have a formal, comprehensive AI policy in place, highlighting a disparity between how often AI is used and how closely it is regulated in workplaces.

 

Policies work twofold to enhance activity and protect businesses 

 

AI is already making a positive impact day to day – for example, over half (56%) of respondents say it has boosted organisational productivity, and 71% report efficiency gains and time savings. Looking ahead, 62% are optimistic that AI will positively impact their organisation in the next year.

 

Yet that same speed and scale make the technology a magnet for bad actors. Almost two-thirds (63%) are extremely or very concerned that generative AI could be turned against them, while 71% expect deepfakes to grow sharper and more widespread in the year ahead. Despite that, only 18% of organisations are putting money into deepfake-detection tools – a significant security gap. This disconnect between rising awareness and lack of organisational investment leaves businesses exposed at a time when AI-powered threats are evolving fast.

 

AI has significant promise, but without clear policies and training to mitigate risks it becomes a potential liability. Robust, role-specific guidelines – covering everything from “when to use AI” to “how to spot a deepfake” – are needed to help businesses safely harness AI’s potential.

 

“The UK Government has made it clear through its AI Opportunities Action Plan that responsible AI adoption is a national priority,” says Chris Dimitriadis, ISACA’s Chief Global Strategy Officer. “But while awareness is growing, many organisations are still falling behind when it comes to taking action and adopting it. At the same time AI threats are evolving fast, from deepfakes to phishing, and without adequate training, investment and internal policy, businesses will struggle to keep up. Bridging this risk-action gap is essential if the UK is to lead with innovation and digital trust.” 

 

Education is the way to get the best from AI 

 

But policies are only as effective as the people who understand them – and can confidently put them into practice.

 

As emerging technologies like AI continue to evolve, there is a need to upskill and gain new qualifications: 42% believe that they will need to increase their skills and knowledge in AI within the next six months in order to retain their job or advance their career – an increase of eight percentage points on last year. Most (89%) recognise that this will be needed within the next two years.

 

Dimitriadis added: “Without guidance, rules or training in place on the extent to which AI can be used at work, employees might continue to use it in the wrong context or in an unsafe way. Equally, they might not be able to spot misinformation or deepfakes as easily as they might if they were equipped with the right knowledge and tools. Technology is evolving, and bad actors are continuously keeping pace with changes to weaponise them, carrying out more sophisticated and advanced attacks. 

 

“That’s why upskilling can’t wait. AI training must be prioritised and properly budgeted for, and at the same time, workplaces must work to embed formal and comprehensive policies that are understood by all as employees continue to experiment with AI in their day-to-day. With more skilled employees, businesses will have a workforce with a better understanding of best practice. These employees are more likely to champion the embedding of policies, making sure regulations are adhered to and upheld to a good standard.” 
