97% of AI leaders commit to responsible AI

Domino’s 2024 REVelate survey reveals that despite high awareness of responsible AI's importance, a critical governance gap persists, threatening innovation and compliance across industries.

  • Sunday, 22nd September 2024 | Posted by Phil Alsop

Domino Data Lab has released its 2024 REVelate survey report, which uncovers a troubling disconnect between AI ambitions and the resources required to execute responsible AI governance. The survey, which included responses from 117 AI leaders, reveals that while 97% of organizations have set goals for responsible AI, nearly half (48%) are under-resourced to implement the necessary governance frameworks.

The findings highlight a growing readiness gap in the enterprise AI landscape, where responsible AI is increasingly seen as essential for innovation and compliance, but where resource constraints prevent full execution.

“Despite the growing recognition of responsible AI’s value, many enterprises are ill-equipped to enforce governance, risking financial penalties, reputational damage, and stunted innovation,” said Dr. Kjell Carlsson, head of AI Strategy at Domino Data Lab. “Combine the desire to scale AI to all parts of the business, with increasing regulation at an international, state and even municipal level, and it becomes more important than ever for organizations to govern the development and deployment of AI effectively.”

AI Governance Emerges as a Strategic Priority, but Resources Lag Behind

The survey illustrates that responsible AI is now a top business priority, with 43% of leaders rating it as “extremely critical” to driving business outcomes, outpacing traditional priorities such as business intelligence. Nevertheless, resource shortages remain a major obstacle: nearly half of respondents (48%) cite resource constraints as the biggest barrier to implementing effective AI governance, alongside insufficient technology to govern AI (38%).

High Stakes: The Costs of Inadequate AI Governance

The risks of failing to properly govern AI are substantial. The survey found that regulatory violations are the top concern for 49% of respondents, with potential fines under regulations like the EU AI Act reaching as high as 7% of global annual revenue. Beyond regulatory concerns, 46% of respondents fear reputational damage and stalled innovation if governance issues are not addressed.

Financial costs also weigh heavily on organizations, with 34% of respondents reporting increased operational expenses due to errors in poorly governed AI systems.

Balancing Innovation and Regulation

While there is broad support for AI regulation, with 71% of AI leaders believing that rules will ensure the safe use of AI, there is concern that overly strict governance might slow innovation: 44% of respondents worry that regulations could hamper AI adoption.

The survey also reveals a divide in opinions on the current state of AI governance: 51% of respondents doubt that existing regulatory frameworks are enforceable in their current form, highlighting the ongoing need for better-defined and implementable standards.

The Path Forward: Implementing AI Governance Frameworks

To address these governance challenges, many organizations are prioritizing frameworks that translate responsible AI principles into practice. The survey found that 47% of companies are focused on defining responsible AI principles, while 44% are deploying governance platforms to ensure policies are applied consistently across the AI lifecycle. Forming AI ethics boards ranks significantly lower, with only 29% citing it as a top approach to implementing responsible AI. Additionally, logging and auditability (74%), reproducibility (68%), and monitoring (61%) emerged as the most critical capabilities needed to support responsible AI.
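As a minimal sketch of what those capabilities can mean in practice, the Python snippet below records an audit trail for a single model decision. It is an illustration only, not part of the survey or any specific governance platform; the names (audit_record, log_prediction, MODEL_VERSION) and the example model are hypothetical.

```python
# Hypothetical illustration of audit logging, reproducibility, and monitoring
# hooks around a model call. Not taken from Domino's survey or any vendor API.
import hashlib
import json
import time
import uuid

MODEL_VERSION = "fraud-scorer-1.4.2"  # hypothetical model identifier


def audit_record(features: dict, prediction: float) -> dict:
    """Build a replayable record of a single model decision."""
    payload = json.dumps(features, sort_keys=True)
    return {
        "id": str(uuid.uuid4()),
        "timestamp": time.time(),
        # Reproducibility: record exactly which model produced the output.
        "model_version": MODEL_VERSION,
        # Auditability: hash the input so the decision can be traced
        # without storing raw, potentially sensitive data in the log.
        "input_hash": hashlib.sha256(payload.encode()).hexdigest(),
        "prediction": prediction,
    }


def log_prediction(record: dict) -> None:
    """Append the record to an audit log (stdout here for simplicity)."""
    print(json.dumps(record))


if __name__ == "__main__":
    # Example: score one transaction and log the decision for later review.
    features = {"amount": 120.50, "country": "DE", "card_age_days": 420}
    prediction = 0.07  # stand-in for a real model.predict(features) call
    log_prediction(audit_record(features, prediction))
```

In a production setting the same record would typically flow into a write-once store and a monitoring system, so drift in prediction distributions can be detected and individual decisions can be reconstructed during an audit.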
