Why most agentic AI projects fail, and how to avoid being one of them

By Martin Tombs, Field CTO EMEA, Qlik.

Tuesday, 17th March 2026. Posted by Phil Alsop.

As businesses grow accustomed to generative AI, attention is quickly turning to agentic AI. These systems are designed to plan tasks, interpret information and take action within defined guardrails. In theory, this moves AI from a tool that assists employees to one that helps run parts of the business.

Investment is rising fast, with McKinsey predicting that the agentic AI market will grow from roughly $5-7 billion in 2024 to over $199 billion by 2034. But many businesses are finding it harder than expected to turn early pilots into something reliable and useful at scale.

Gartner predicts that more than 40% of agentic AI projects will be cancelled by the end of 2027. Meanwhile, Qlik found that 97% of organisations have committed budget to agentic AI, but only 18% are fully deploying it. Many see the potential, yet practical deployment still proves difficult when systems are expected to operate reliably in real business environments.

When AI starts acting inside workflows 

Early generative AI tools largely acted as assistants. Employees used them to answer questions, summarise documents or draft content. If the response was slightly wrong, the impact was usually limited.

Agentic systems operate differently. They can interpret signals, recommend next steps and carry out tasks across enterprise systems. In practice, this might involve identifying unusual changes in financial performance, triggering a supply chain adjustment or initiating an operational workflow.

Once AI interacts directly with business processes, the margin for error becomes much smaller. A generative AI recommendation can be reviewed before action is taken, but an automated workflow requires far greater confidence in the information and logic behind it.

This is where many businesses discover their underlying data foundations are not ready. 

Fixing the data foundations first

The most common reason agentic AI projects stall is a lack of data maturity. Agents depend on a consistent and trusted view of information across the organisation, yet many businesses still operate with fragmented data, duplicated sources and unclear ownership. In these conditions, even the strongest AI models struggle to produce outputs that teams can comfortably rely on. 

Unstructured information adds another layer of complexity. Internal documents, emails and knowledge bases often contain useful context but rarely have clear ownership. That makes it difficult to verify whether the information is current, accurate or even still relevant when an AI agent draws on it.

As agents begin interacting with operational systems, these weaknesses become more visible. If the information feeding those systems is inconsistent or outdated, the reliability of the agent’s outputs quickly comes into question. Strengthening those data foundations is often the first step before agentic AI can be deployed with confidence.

Who is responsible when AI takes action

As agents take on more responsibility, governance becomes a practical issue rather than a theoretical one. Organisations need clear answers to some basic questions. Who owns the data feeding the system? Who signs off on actions an agent takes? And when should a person step in and review a decision?

Clear accountability helps teams trust the systems they implement and reduces the risk of mistakes. It also makes it possible to understand how decisions were reached, which matters when AI outputs affect revenue, compliance or business planning.
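The questions above can be encoded directly in an agent's execution path: low-risk actions run automatically, higher-risk ones are routed to a named approver, and every decision lands in an audit trail. This is a minimal sketch of that pattern; `execute_action`, the 0.5 risk threshold and the `approve_fn` callback are all illustrative assumptions, not a real product's API.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Callable

# One audit record per decision: what happened, who decided, and when.
@dataclass
class AuditEntry:
    action: str
    decided_by: str
    approved: bool
    timestamp: datetime = field(default_factory=datetime.now)

audit_log: list[AuditEntry] = []

def execute_action(action: str, risk: float, approver: str,
                   approve_fn: Callable[[str], bool],
                   risk_threshold: float = 0.5) -> bool:
    """Run low-risk actions automatically; escalate risky ones to a person."""
    if risk >= risk_threshold:
        approved = approve_fn(action)   # e.g. notify the approver and wait
        decided_by = approver
    else:
        approved = True
        decided_by = "auto-policy"
    audit_log.append(AuditEntry(action, decided_by, approved))
    return approved
```

Because every path writes to the audit log, the "who signed off" question always has an answer, whether a person or a policy made the call.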

Regulation can help provide structure here. Europe’s AI rules, including the EU AI Act, aim to set expectations around transparency, accountability and risk early in the development of AI systems. While regulation is sometimes seen as slowing innovation, clearer rules can make it easier for organisations to use AI responsibly. 

Getting AI tools to work together

Another challenge emerging with agentic AI is the growing number of assistants operating across a business. Most organisations are not relying on a single model or platform. Different teams often use different AI tools depending on their needs, from analytics platforms to internal systems and external assistants.

For agents to work effectively in that environment, they need secure ways to access trusted data and interact with other systems. Without that connection, agents operate in isolation and their usefulness quickly becomes limited.

This is where shared standards are starting to play a role. Technologies such as Model Context Protocol (MCP) allow AI assistants to connect with enterprise platforms while keeping access controls and governance in place. Instead of building custom integrations for every tool, organisations can expose data and analytics through consistent interfaces that different assistants can use.
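The core idea behind protocols like MCP can be sketched in a few lines: tools are registered once behind a consistent interface, any assistant can call them by name, and governance checks run on every call. The snippet below is a plain-Python analogue of that pattern, not the MCP specification itself; `ToolRegistry`, the role names and the `revenue_summary` tool are invented for illustration (a real deployment would use the official MCP SDK).

```python
from typing import Any, Callable

class ToolRegistry:
    """Illustrative MCP-like pattern: tools registered once, callable by
    any assistant, with an access-control check on every call."""
    def __init__(self) -> None:
        self._tools: dict[str, Callable[..., Any]] = {}

    def tool(self, name: str, allowed_roles: set[str]):
        def decorator(fn: Callable[..., Any]) -> Callable[..., Any]:
            def guarded(caller_role: str, *args: Any, **kwargs: Any) -> Any:
                if caller_role not in allowed_roles:
                    raise PermissionError(f"{caller_role} may not call {name}")
                return fn(*args, **kwargs)
            self._tools[name] = guarded
            return fn
        return decorator

    def call(self, name: str, caller_role: str, *args: Any, **kwargs: Any) -> Any:
        return self._tools[name](caller_role, *args, **kwargs)

registry = ToolRegistry()

@registry.tool("revenue_summary", allowed_roles={"finance_agent"})
def revenue_summary(quarter: str) -> str:
    # Placeholder: a real tool would query a governed analytics platform.
    return f"Revenue summary for {quarter}"
```

Any assistant that speaks to the registry gets the same tool with the same access rules, which is the point: one interface instead of a custom integration per tool.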

As more AI tools enter the workplace, making sure they can work together and access reliable data will become increasingly important. Organisations that plan for this early will find it much easier to scale agentic systems across the business.

Building agentic AI that works 

Agentic AI has the potential to completely change how organisations operate for the better. But success depends on preparing the systems underneath first, putting the right data, accountability and controls in place before scaling beyond pilots.
