97% of AI leaders commit to responsible AI

Domino’s 2024 REVelate survey reveals that despite high awareness of responsible AI's importance, a critical governance gap persists, threatening innovation and compliance across industries.

  • Sunday, 22nd September 2024 Posted by Phil Alsop

Domino Data Lab has released its 2024 REVelate survey report, which uncovers a troubling disconnect between AI ambitions and the resources required to execute responsible AI governance. The survey, which included responses from 117 AI leaders, reveals that while 97% of organizations have set goals for responsible AI, nearly half (48%) are under-resourced to implement the necessary governance frameworks.

The findings highlight a growing readiness gap in the enterprise AI landscape, where responsible AI is increasingly seen as essential for innovation and compliance, but where resource constraints prevent full execution.

“Despite the growing recognition of responsible AI’s value, many enterprises are ill-equipped to enforce governance, risking financial penalties, reputational damage, and stunted innovation,” said Dr. Kjell Carlsson, head of AI Strategy at Domino Data Lab. “Combine the desire to scale AI to all parts of the business, with increasing regulation at an international, state and even municipal level, and it becomes more important than ever for organizations to govern the development and deployment of AI effectively.”

AI Governance Emerges as a Strategic Priority, but Resources Lag Behind

The survey illustrates that responsible AI is now a top business priority, with 43% of leaders rating it as “extremely critical” to driving business outcomes, outpacing traditional priorities like business intelligence. Nevertheless, resource shortages remain a major obstacle: nearly half of respondents (48%) cite resource constraints as the biggest barrier to implementing effective AI governance, alongside insufficient technology to govern AI (38%).

High Stakes: The Costs of Inadequate AI Governance

The risks of failing to properly govern AI are substantial. The survey found that regulatory violations are the top concern for 49% of respondents, with potential fines under regulations like the EU AI Act reaching as high as 7% of global annual revenue. Beyond regulatory concerns, 46% of respondents fear reputational damage and stalled innovation if governance issues are not addressed.

Financial costs also weigh heavily on organizations, with 34% of respondents reporting increased operational expenses due to errors in poorly governed AI systems.

Balancing Innovation and Regulation

While there is broad support for AI regulations, with 71% of AI leaders believing that regulations will ensure the safe use of AI, there is concern that overly strict governance might slow down innovation: 44% of respondents worry that regulations could hamper AI adoption.

The survey also reveals a divide in opinions on the current state of AI governance: 51% of respondents doubt that existing regulatory frameworks are enforceable in their current form, highlighting the ongoing need for better-defined and implementable standards.

The Path Forward: Implementing AI Governance Frameworks

To address these governance challenges, many organizations are prioritizing frameworks that translate responsible AI principles into practice. The survey found that 47% of companies are focused on defining responsible AI principles, while 44% are deploying governance platforms to ensure policies are applied consistently across the AI lifecycle. Forming AI ethics boards ranks significantly lower: only 29% ranked it a top approach to implementing responsible AI. Additionally, logging and auditability (74%), reproducibility (68%), and monitoring (61%) emerged as the most critical capabilities needed to support responsible AI.
