New data raises the question - is an AI power struggle in the C-suite imminent?

More than 69% of executives expect to spearhead their organisation’s AI efforts.

  • Thursday, 20th June 2024, by Phil Alsop

Recent research from Dialpad found that over 69% of C-suite executives reporting to the CEO - commonly known as CxOs - expect to spearhead their organisation’s AI efforts. The research also found that three quarters (75%) of respondents believe AI will have a significant impact on their roles in the next three years, and more than 69% are already using the technology. Does this mean an AI power struggle is imminent in the C-suite?

It’s no surprise that AI is top of mind for the C-suite, but with everyone expecting to lead the charge, does this risk confusion, duplication of effort, and even a power struggle over who holds the keys to an organisation’s AI-related decisions? And what can help tackle these challenges and cut through the confusion?

The case for Chief AI Officers

Whilst most executives (69%) are already using AI, this doesn’t mean they all feel comfortable with the technology. In fact, there are some notable concerns: 54% of leaders are worried about AI regulation, and 38% are moderately to extremely concerned about AI in general. In light of this, the case for Chief AI Officers (CAIOs) is a strong one - something the Biden administration in the US has called for recently. In theory, a CAIO will be able to take on these concerns and act as the main point of contact and final authority on all things AI. It will be their responsibility to track regulatory developments, shield other executives and teams from that burden, and set out how AI is used across the business. Crucially, a CAIO can also stop any AI ‘power struggle’ from forming, acting as the figurehead for AI-related decisions and planning in a business.

“We will likely begin to see other governments echo the Biden administration’s call for more Chief AI Officer roles to be created,” said Jim Palmer, CAIO of Dialpad. “The CAIO role is one that can manage and mitigate much of the risk that comes with AI development, ensuring privacy and security standards are met and that customers understand how and why data is being used. The specifics of the CAIO role are far from fully mapped out, but as the AI boom continues, this will no doubt change in the coming months and years.”

Too many tools

The research also found that leaders across the business are often using multiple AI solutions: 33% of executives are using at least two, 15% are using three, and 10% are using four or more. If multiple different AI solutions are in use across a business, there is a real risk of duplication between departments - meaning companies often pay unnecessarily for tools with the same functionality. Disparate tools and solutions can also undermine a single source of truth, making it harder to align teams around the same data, goals and objectives - fuelling the potential AI power struggle among executives even further.

This is another area where the role of a CAIO can be so important, reducing duplication and ensuring the business invests smartly in AI.

Data protection and practices

Half (50%) of executives are moderately to extremely concerned about the possibility of data leakage. On top of this, they have concerns about security (22%), accuracy (18%), and the cost of models, compute power and expertise (12%). Additionally, 91% of companies will determine they do not have enough data to achieve a level of precision their stakeholders will trust.

It’s clear that, for all its value, AI is still full of unknowns for many leaders. A CAIO can, again, take on the responsibility of tackling these concerns head on. Alongside this, partnering with AI-native companies that develop their own proprietary AI services can offer real reassurance. Companies built on their own proprietary AI stack have greater control over data, privacy, security, performance and cost – key topics of discussion for organisations and regulators right now. Partnering with an AI native should also, in theory, mean greater accuracy. Why? Because, while public AI is trained on data from across the entire web, AI natives can train their models on data that is relevant - making them faster and more accurate. It's like writing a history book from guaranteed facts versus a combination of facts and lies, with no way to distinguish between the two.

By embracing distributed leadership, fostering alignment, and standardising AI tools, businesses can navigate the evolving landscape of AI with confidence and clarity. 
