AI governance lags behind rapid adoption: a call for responsible deployment

A new report from OpenText highlights gaps in security and governance as enterprises rapidly adopt AI technologies without necessary risk management strategies.

Tuesday, 31st March 2026, by Sophie Milburn
In a newly released global report, OpenText has highlighted concerns surrounding the rapid deployment of AI technologies, particularly Generative AI (GenAI) and Agentic AI. Conducted in collaboration with the Ponemon Institute, the study identifies gaps in security and governance practices across industries. While over half of participating enterprises (52%) have integrated GenAI solutions, many lack the foundational elements needed to manage associated risks.

The report notes that AI maturity extends beyond adoption and requires the integration of security and governance frameworks. Currently, only 20% of enterprises have achieved full AI maturity, defined as the deployment of AI in cybersecurity activities alongside risk assessments. Additionally, 43% of businesses have adopted a risk-based AI management strategy.

The findings also highlight gaps between AI deployment and the governance practices intended to support trust and compliance. According to the report, 79% of organisations have not yet reached full AI maturity in cybersecurity, and only 41% have implemented AI-specific data privacy policies.

The survey outlines several challenges reported by enterprises:
  • A majority (62%) report difficulties in mitigating model and bias risks, including ethical concerns during language model development.
  • 58% of respondents report challenges in minimising prompt or input risks, such as misleading responses.
  • 56% of participants report challenges in managing user risks, including the potential spread of misinformation.
The report also notes that trust and reliability remain areas of concern, with governance gaps potentially undermining security effectiveness. In terms of effectiveness:
  1. 51% of respondents say AI is effective in reducing the time to detect anomalies.
  2. 48% rate AI as effective in threat detection and hunting, with limitations linked to model biases and decision rule errors.
  3. Confidence in AI’s ability to operate autonomously remains limited, with 47% stating their models can independently make sound decisions. The report indicates that human oversight continues to be required due to the pace of threat adaptation.
To realise business value from AI, organisations are encouraged to build in transparency and control from the outset. The report identifies secure information management systems, governance frameworks, and continuous monitoring as key priorities. Aligning AI with existing data and security practices is presented as a way to support both innovation and operational use of AI, according to OpenText.
