New data raises the question: is an AI power struggle in the C-suite imminent?

More than 69% of executives expect to spearhead their organisation’s AI efforts.

  • Thursday, 20th June 2024, by Phil Alsop

Recent research from Dialpad found that over 69% of C-suite executives reporting to the CEO - commonly known as CxOs - expect to spearhead their organisation’s AI efforts. The research also found that three quarters (75%) of respondents believe AI will have a significant impact on their roles in the next three years, and more than 69% are already using the technology. Does this mean an AI power struggle is imminent in the C-suite?

It’s no surprise that AI is top of mind for the C-suite, but with everyone expecting to lead the charge, does this risk confusion, duplication of effort, and even a power struggle over who holds the keys to an organisation’s AI-related decisions? And what can help tackle these challenges and cut out the confusion?

The case for Chief AI Officers

Whilst most (69%) executives are already using AI, this doesn’t mean they all feel comfortable with the technology. In fact, there are some notable concerns, with 54% of leaders worried about AI regulation and 38% moderately to extremely concerned about AI in general. In light of this, the case for Chief AI Officers (CAIOs) is a strong one - something the Biden administration in the US has called for recently. In theory, a CAIO can take on these concerns and act as the main point of contact and final authority on all things AI. It will be their responsibility to track regulatory developments, shield other executives and teams from that burden, and determine how AI is used across the business. Crucially, a CAIO can also stop any AI ‘power struggle’ from forming, acting as a figurehead for AI-related decisions and planning in a business.

“We will likely begin to see other governments echo the Biden administration’s call for more Chief AI Officer roles to be created,” said Jim Palmer, CAIO of Dialpad. “The CAIO role is one that can manage and mitigate much of the risk that comes with AI development, ensuring privacy and security standards are met and that customers understand how and why data is being used. The specifics of the CAIO role are far from fully mapped out, but as the AI boom continues, this will no doubt change in the coming months and years.”

Too many tools

The research also found that leaders across the business are often using multiple AI solutions - 33% of executives are using at least two AI solutions, 15% are using three, and 10% are using four or more. If multiple different AI solutions are in use across a business, there is a real risk of duplication across departments - meaning companies are often unnecessarily paying for tools with the same functionality. Disparate tools and solutions can also undermine a single source of truth, making it harder to align teams around the same data, goals and objectives - fuelling the potential AI power struggle among executives even further.

This is another area where the role of a CAIO can be so important, reducing duplication and ensuring the business invests smartly into AI.

Data protection and practices

Half (50%) of executives are moderately to extremely concerned about the possibility of data leakage. On top of this, they cite security concerns (22%), accuracy worries (18%), and fears over the cost of models, compute power, and expertise (12%). Additionally, 91% of companies will determine that they do not have enough data to achieve a level of precision their stakeholders will trust.

It’s clear that, for all its value, AI is still full of unknowns for many leaders. A CAIO can, again, take on the responsibility of tackling these concerns head on. Alongside this, partnering with AI-native companies that develop their own proprietary AI services can offer much reassurance. Companies built upon their own proprietary AI stack have greater control over data, privacy, security, performance and cost - key topics of discussion for organisations and regulators currently. Additionally, partnering with an AI native should, in theory, ensure greater accuracy. Why? Because, while public AI is trained on the entire world wide web of data, AI natives can train their models on data that is relevant - making them faster and more accurate. It's like writing a history book from guaranteed facts versus a mix of facts and lies, with no way to distinguish between the two.

By embracing distributed leadership, fostering alignment, and standardising AI tools, businesses can navigate the evolving landscape of AI with confidence and clarity. 
