The Agentic AI Interoperability Gap

By Inna Weiner, VP of Product, Data & AI, AppsFlyer.

Sunday, 3rd May 2026. Posted by Phil Alsop.

Agentic AI is often presented as the next major leap in enterprise productivity. Give an AI agent a goal, connect it to your tools, and let it autonomously analyse, decide, and act. In theory, this promises something every organisation wants: faster execution, fewer manual handoffs, and insights that translate instantly into action. In practice, however, most enterprises are discovering that the hardest part of agentic AI is not the intelligence. It is the infrastructure that sits beneath it. The real challenge is not the model. It is the plumbing.

The real bottleneck is not the brain

Much of the current conversation around agentic AI focuses on model capabilities such as reasoning, planning, memory, and multi-step decision-making. These are important, but they tend to distract from a more stubborn reality. Most enterprise systems were never designed for non-human actors to operate them autonomously.

Enterprise tools were built for humans clicking buttons in graphical interfaces, not for agents making structured API calls across platforms. When an AI agent attempts to orchestrate a workflow that spans collaboration tools, CRM systems, BI dashboards, and external platforms, it quickly runs into friction. Authentication models may differ, permissions can be scoped inconsistently, and data schemas rarely align. 

Worse, these failures often happen quietly. An agent does not always “crash” in a way that triggers alerts. Instead, a field maps incorrectly, a permission scope blocks a write operation, or a rate limit is hit outside business hours. The workflow technically completes, but the outcome is wrong or incomplete. By the time a human notices, the damage has already propagated downstream.

Until organisations recognise agentic AI as an integration architecture problem and not just an AI problem, these failures will persist, especially at the seams between systems where modern enterprises already struggle.

The rise of the “human glue layer”

Because of these limitations, many companies have unintentionally created a stopgap pattern. AI agents analyse data, generate recommendations, and even propose actions. But humans are still responsible for executing those actions across tools. This is what I call the “human glue layer”.

Consider a common scenario. An AI agent detects a performance drop in a marketing channel and recommends shifting 15% of budget to another channel. The insight is correct and timely. But execution still requires a human to log into the advertising platform, apply the change, confirm it, paste the confirmation into a Slack or Teams thread, update a shared spreadsheet, and notify finance. The AI did the thinking, but the human did the doing.

That is not true automation. It is a sophisticated copy-and-paste workflow with AI branding.

This pattern exists because many organisations deployed AI assistants before building the action layer they require. They invested in copilots that can summarise, analyse, and recommend, but not in the governance, permissions, and integration infrastructure needed for agents to act safely. The result is that humans become middleware, translating AI output into execution across disconnected systems. As a long-term operating model, that is unsustainable.
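The budget-shift example above hints at what an action layer would need to consume: a structured proposal rather than free text a human must interpret. A minimal sketch, purely illustrative (every name here is hypothetical, not a real product or API):

```python
from dataclasses import dataclass, field

@dataclass
class ActionProposal:
    """A structured action an agent could emit instead of prose,
    so an execution layer, not a human, carries it out."""
    action: str                       # e.g. "shift_budget"
    params: dict                      # explicit, machine-readable inputs
    requires_approval: bool = True    # human checkpoint before execution
    notify: list = field(default_factory=list)  # downstream stakeholders

# The scenario from the text, expressed as data rather than a Slack message
proposal = ActionProposal(
    action="shift_budget",
    params={"from_channel": "search", "to_channel": "social", "fraction": 0.15},
    notify=["#marketing", "finance"],
)
```

The point is not the class itself but the contract: once the recommendation is structured, the copy-and-paste steps in the scenario become automatable, auditable operations.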

Why interoperability changes everything

The interoperability gap in agentic AI is not a minor technical inconvenience. It has far-reaching implications for productivity, governance, and workflow design.

From a productivity perspective, the promise of agentic AI is “insight to action in seconds”. The reality, for most organisations today, is “insight to action in hours, with a human copying data across multiple tabs”. As long as agents cannot execute end-to-end workflows reliably, productivity gains will remain largely theoretical.

Governance is where the stakes become higher. In human-driven workflows, governance is enforced implicitly. A person moving data from one system to another exercises judgement about sensitivity, relevance, and approval. When an agent performs the same action hundreds of times per hour, that judgement must be explicitly codified.

Most enterprises today have governance frameworks designed around human behaviour, not autonomous actors. They are not prepared for a world in which agents routinely initiate cross-platform actions, move data between systems, or trigger operational changes at machine speed. Without rethinking governance models, organisations will be forced to choose between risk exposure and stalled automation.

The most underestimated challenge: workflow design

Perhaps the most overlooked implication of agentic AI is its impact on workflow design. Agentic AI does not simply automate existing workflows. It exposes the fact that many existing workflows are fundamentally incompatible with automation. They rely on manual handoffs, UI-dependent steps, undocumented exceptions, and tribal knowledge that lives in people’s heads rather than in auditable systems.

These workflows may function tolerably well for humans, but they break down when an agent attempts to execute them. For agentic AI to work effectively, workflows must be reimagined as composable, API-native sequences with clearly defined inputs, outputs, approval checkpoints, and fallback behaviours.

This requires more than technical integration; it demands organisational introspection. Teams need to ask not just “can an agent do this?” but “should this workflow exist in this form at all?” In many cases, agentic AI becomes the forcing function that reveals just how much operational debt has been quietly accumulating.
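A workflow with “clearly defined inputs, outputs, approval checkpoints, and fallback behaviours”, as described above, can be sketched as a sequence of composable steps. This is an illustrative pattern under assumed names, not a reference implementation:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    """One composable unit of an agent-executable workflow."""
    name: str
    run: Callable[[dict], dict]                   # explicit inputs -> outputs
    fallback: Callable[[dict, Exception], dict]   # defined failure behaviour
    needs_approval: bool = False                  # human checkpoint

def execute(steps: list[Step], ctx: dict, approve: Callable[[str, dict], bool]) -> dict:
    """Run steps in order; stop at an unapproved checkpoint, never fail silently."""
    for step in steps:
        if step.needs_approval and not approve(step.name, ctx):
            break  # approval checkpoint blocks the rest of the workflow
        try:
            ctx = step.run(ctx)
        except Exception as exc:
            ctx = step.fallback(ctx, exc)  # explicit, auditable fallback
    return ctx
```

Contrast this with the UI-dependent, tribal-knowledge workflows the article describes: here every handoff, exception, and approval is visible in code rather than living in someone’s head.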

A signal, not a shortcut

It is tempting to view agentic AI as a plug-and-play upgrade to an existing technology stack. Deploy the model, connect a few tools, and reap the rewards. That mindset is precisely what leads to fragile systems and disappointed stakeholders.

Agentic AI should instead be treated as a signal: a clear indication that an organisation’s stack, governance structures, and workflows need to evolve together. The companies that succeed will be those that invest not only in AI capabilities but also in the unglamorous work of interoperability, standardising integrations, modernising permissions, redesigning workflows, and redefining governance for autonomous execution, delivered as a platform: not once per agent, but once for all agents.

Others will continue to rely on the human glue layer, asking their teams to hold together an increasingly complex AI-powered Rube Goldberg machine. That approach may work for a while. But it does not scale, and eventually it will break at the seams.
