97% of AI leaders commit to responsible AI

Domino’s 2024 REVelate survey reveals that despite high awareness of responsible AI's importance, a critical governance gap persists, threatening innovation and compliance across industries.

  • Sunday, 22nd September 2024. Posted by Phil Alsop

Domino Data Lab has released its 2024 REVelate survey report, which uncovers a troubling disconnect between AI ambitions and the resources required to execute responsible AI governance. The survey, which included responses from 117 AI leaders, reveals that while 97% of organizations have set goals for responsible AI, nearly half (48%) are under-resourced to implement the necessary governance frameworks.

The findings highlight a growing readiness gap in the enterprise AI landscape, where responsible AI is increasingly seen as essential for innovation and compliance, but where resource constraints prevent full execution.

“Despite the growing recognition of responsible AI’s value, many enterprises are ill-equipped to enforce governance, risking financial penalties, reputational damage, and stunted innovation,” said Dr. Kjell Carlsson, head of AI Strategy at Domino Data Lab. “Combine the desire to scale AI to all parts of the business, with increasing regulation at an international, state and even municipal level, and it becomes more important than ever for organizations to govern the development and deployment of AI effectively.”

AI Governance Emerges as a Strategic Priority, but Resources Lag Behind

The survey shows that responsible AI is now a top business priority, with 43% of leaders rating it as “extremely critical” to driving business outcomes, outpacing traditional priorities such as business intelligence. Resources, however, have not kept pace: nearly half of respondents (48%) cite resource constraints as the biggest barrier to implementing effective AI governance, and 38% point to insufficient technology for governing AI.

High Stakes: The Costs of Inadequate AI Governance

The risks of failing to properly govern AI are substantial. The survey found that regulatory violations are the top concern for 49% of respondents, with potential fines under regulations like the EU AI Act reaching as high as 7% of global annual revenue. Beyond regulatory concerns, 46% of respondents fear reputational damage and stalled innovation if governance issues are not addressed.
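For scale, the sketch below works through what a 7% cap can mean in absolute terms. It is purely illustrative: the revenue figure is hypothetical, not a number from the survey or the regulation.

```python
# Illustrative only: the EU AI Act caps certain fines at 7% of global annual
# turnover (or a fixed amount, whichever is higher). The revenue below is a
# hypothetical example, not data from the REVelate survey.
FINE_RATE = 0.07
global_annual_revenue_eur = 2_000_000_000  # hypothetical company with €2B revenue

max_fine_eur = FINE_RATE * global_annual_revenue_eur
print(f"Maximum exposure: €{max_fine_eur:,.0f}")  # €140,000,000
```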

Financial costs also weigh heavily on organizations, with 34% of respondents reporting increased operational expenses due to errors in poorly governed AI systems.

Balancing Innovation and Regulation

While there is broad support for AI regulations, with 71% of AI leaders believing that regulations will ensure the safe use of AI, there is concern that overly strict governance might slow innovation: 44% of respondents worry that regulations could hamper AI adoption.

The survey also reveals a divide in opinions on the current state of AI governance: 51% of respondents doubt that existing regulatory frameworks are enforceable in their current form, highlighting the ongoing need for better-defined and implementable standards.

The Path Forward: Implementing AI Governance Frameworks

To address the governance challenges, many organizations are prioritizing frameworks that translate responsible AI principles into practice. The survey found that 47% of companies are focused on defining responsible AI principles, while 44% are deploying governance platforms to ensure policies are applied consistently across the AI lifecycle. Forming AI ethics boards ranks significantly lower, with only 29% naming it a top approach to implementing responsible AI. Additionally, logging and auditability (74%), reproducibility (68%), and monitoring (61%) emerged as the most critical capabilities needed to support responsible AI.
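To make those capabilities concrete, the following is a minimal sketch, assuming a generic Python setup; the model, version string, and field names are illustrative and do not represent any specific governance platform or Domino product. It shows how a prediction call can emit an auditable log record and pin a random seed so results can be reproduced later.

```python
import json
import logging
import random
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("model_audit")

MODEL_VERSION = "demo-0.1"  # illustrative placeholder, not a real model version
RANDOM_SEED = 42            # fixed seed supports reproducibility

def predict(features):
    """Toy stand-in for a real model call; deterministic given the seed."""
    random.seed(RANDOM_SEED)
    return round(random.random(), 4)

def governed_predict(features):
    """Run a prediction and emit an auditable, reproducible log record."""
    score = predict(features)
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": MODEL_VERSION,
        "random_seed": RANDOM_SEED,
        "inputs": features,
        "output": score,
    }
    audit_log.info(json.dumps(record))  # append-only log enables later audits
    return score

if __name__ == "__main__":
    governed_predict({"customer_age": 42, "region": "EU"})
```

In practice, a monitoring layer would consume these records to track drift and error rates; the point here is only that logging, reproducibility, and monitoring are engineering capabilities, not just policy statements.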
