Will AI Agents Collapse the App Stack?

By Mohammad Ismail, VP of EMEA, Cequence Security.

  • Monday, 18th August 2025, posted by Phil Alsop

Much has been made of the claim that AI will destroy search, and the ad revenue business models along with it, as AI crawler bots pillage sites and serve up information in so-called ‘zero-click’ searches. But other familiar conduits are also likely to fall by the wayside as AI disrupts the status quo. One notable casualty could be the app stack (the software tools and technologies used to build, run and manage applications), which is expected to fracture because of the way AI seeks out information.

Search engines such as Google and Bing have been quick to adapt to and harness AI, which means traffic traversing the internet will increasingly be machine-to-machine (M2M). Instead of humans entering a search term and accessing information via a browser, the traffic flowing to websites, repositories and other resources will originate from agents.

We can expect a similar chain of events when LLMs and Agentic AI apps are prompted and seek to gain access to not just information but tools or services. Those agents will take the path of least resistance, cutting across user interfaces, skipping web and mobile layers and heading straight for the Application Programming Interfaces (APIs) that hold the key to what they seek. In effect, they’ll make some of those well-used pathways all but obsolete.

Such shortcuts will have a profound effect because many of the security processes currently in place were built on the assumption that they would be governing interactions with humans, not machines.

Identity management and other associated controls such as zero trust network architecture (ZTNA) are a case in point. These aim to verify whether access requests are coming from a human and whether that human has the appropriate level of clearance. Similarly, incident detection and response (IDR) tools are designed to look for indicators of compromise or events that signal whether an access attempt is human or bot driven.

A bot with a brain

The question then becomes how do you authenticate and authorise what is essentially a bot with a brain? Every security control you have is going to reject it, unless you find a way to verify and authenticate access attempts made by ‘good’ bots. It then becomes a matter of identifying which bots will be allowed to progress and corralling them so that they don’t just crawl all over the stack.
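In practice, admitting ‘good’ bots usually means maintaining an allowlist of known agent identities and verifying each request cryptographically. The sketch below illustrates the idea with HMAC-signed requests; the agent names and keys are hypothetical, and a real deployment would use a proper credential scheme rather than hard-coded secrets.

```python
import hmac
import hashlib

# Hypothetical shared secrets for the 'good' bots we have chosen to admit.
TRUSTED_AGENTS = {
    "search-indexer": b"secret-key-1",
    "pricing-agent": b"secret-key-2",
}

def verify_agent(agent_id: str, payload: bytes, signature: str) -> bool:
    """Admit a request only if the agent is on the allowlist and its
    HMAC-SHA256 signature over the request payload checks out."""
    key = TRUSTED_AGENTS.get(agent_id)
    if key is None:
        return False  # unknown bots are rejected outright
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(expected, signature)
```

An unknown agent, or a known agent presenting a bad signature, is simply turned away before it reaches the stack.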

Inroads have been made into solving this problem in the financial sector, which has been grappling with how to facilitate M2M payments. Agents need to be able to prove their identity as part of this process, so payment providers have found a way to assign them an identifier and a programmable wallet with funding sources governed by enterprise-grade payment rules. This then allows the agent to verify itself, log in and pay for goods and services over peer-to-peer connections just as a human would.
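The ‘enterprise-grade payment rules’ mentioned above amount to policy enforced in code. A minimal sketch, with entirely hypothetical class and rule names, might bind a wallet to an agent identity and apply a per-transaction cap and a merchant allowlist:

```python
class PaymentDenied(Exception):
    """Raised when a payment violates the wallet's rules."""

class ProgrammableWallet:
    """Illustrative wallet bound to an agent identity, governed by
    enterprise-style spend rules (not any specific provider's API)."""

    def __init__(self, agent_id, balance, per_tx_limit, allowed_merchants):
        self.agent_id = agent_id
        self.balance = balance
        self.per_tx_limit = per_tx_limit
        self.allowed_merchants = set(allowed_merchants)

    def pay(self, merchant, amount):
        # Each rule is checked before any funds move.
        if merchant not in self.allowed_merchants:
            raise PaymentDenied(f"{merchant} is not on the allowlist")
        if amount > self.per_tx_limit:
            raise PaymentDenied("amount exceeds per-transaction limit")
        if amount > self.balance:
            raise PaymentDenied("insufficient funds")
        self.balance -= amount
        return {"agent": self.agent_id, "merchant": merchant, "amount": amount}
```

The point is that the agent can transact autonomously, but only inside guardrails the enterprise has set in advance.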

Combining this third-party verification with API protection over a shared framework further bolsters defences by blocking malicious bots that seek to scrape, abuse or defraud the site. Traffic patterns and usage flows are continually analysed to identify abnormal behaviour or intent, with the results then used to fine-tune detection. And because this native enforcement happens at the edge, without the need to modify apps or bolt on third-party tools, these security measures are not just adaptive but non-intrusive.

The new middleman: MCP

However, we are also now seeing the introduction of a new layer in the app stack that aims to facilitate easier access to apps. An open standard developed by Anthropic in November 2024, the Model Context Protocol (MCP) enables LLMs and agents to communicate with APIs without the need to resort to custom code. MCP servers essentially act as the middleman, much like APIs did for apps, enabling the AI to access information, tools and services that would previously have taken months to build interfaces for.
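Under the hood, MCP messages are JSON-RPC 2.0, with the server advertising tools that the model can invoke by name. The sketch below shows the shape of that middleman role with a hand-rolled dispatcher; the tool, its name and the simplified result format are hypothetical, and a real server would use an MCP SDK rather than this minimal handler.

```python
import json

# Hypothetical business function the MCP server exposes as a tool.
def get_invoice_status(invoice_id: str) -> str:
    return f"Invoice {invoice_id}: paid"

TOOLS = {"get_invoice_status": get_invoice_status}

def handle(request_json: str) -> str:
    """Dispatch an MCP-style 'tools/call' request (MCP uses JSON-RPC 2.0)."""
    req = json.loads(request_json)
    if req.get("method") != "tools/call":
        return json.dumps({"jsonrpc": "2.0", "id": req.get("id"),
                           "error": {"code": -32601, "message": "method not found"}})
    params = req["params"]
    # The agent names the tool and supplies arguments; the server does the rest.
    result = TOOLS[params["name"]](**params["arguments"])
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})
```

The agent never touches the backend directly: it sends a named tool call, and the MCP server translates that into whatever API or system access is required.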

As a result, MCP is set to become the connective tissue between AI and enterprise systems and will boost the adoption of autonomous aka Agentic AI. But MCP is so new that few organisations are aware of its fallibilities. As we saw with the introduction of APIs, any new means of access must be secured to prevent issues. 

Reports have already emerged of MCP servers leaking data. In June, Asana alerted its users to a logic flaw that could potentially expose data to other MCP users and, separately, security researchers found more than 7,000 MCP servers exposed to the internet, hundreds of which exhibited the ‘NeighborJack’ vulnerability that exposed users on the same local network. Around one per cent of the MCP servers were found to have severe flaws, ranging from unchecked input handling to excessive permissions, which could effectively give an AI agent free rein.

So, while MCP servers could help channel AI and prevent it from running riot across the app stack, their use is not without its own hazards. Those seeking to deploy MCP servers are therefore advised to consider implementing an AI gateway: a means of safely opening enterprise applications to agentic AI via MCP. An AI gateway provides identity-based access to systems and data while preventing unauthorised AI agent access, and it monitors AI-to-API traffic by tracking agent behaviour, access to applications and the API calls being made.
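The two gateway functions described here, identity-based access and traffic monitoring, can be sketched as a single choke point that every agent-to-API call must pass through. All names below are illustrative, not any vendor's product:

```python
import time

# Hypothetical set of agent identities the enterprise has authorised.
AUTHORISED_AGENTS = {"finance-agent", "support-agent"}

audit_log = []  # every agent-to-API call is recorded here, allowed or not

def gateway_call(agent_id: str, api: str, payload: dict) -> dict:
    """Illustrative AI-gateway check: verify the agent's identity before
    forwarding, and log the call so AI-to-API traffic can be monitored."""
    allowed = agent_id in AUTHORISED_AGENTS
    audit_log.append({"ts": time.time(), "agent": agent_id,
                      "api": api, "allowed": allowed})
    if not allowed:
        raise PermissionError(f"agent {agent_id!r} is not authorised")
    # In a real gateway the request would now be forwarded to the backend API.
    return {"api": api, "status": "forwarded", "payload": payload}
```

Because denied attempts are logged as well as successful ones, the audit trail supports exactly the behaviour tracking and compliance oversight the article describes.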

Runaway rollouts

AI gateways also reduce the pain associated with rolling out agentic AI by eliminating the need for backend integration, speeding time to deployment. We’re continually hearing from organisations that have sunk time and money into these deployments: they either have little to show for it or find they have connected agents to their core systems without putting sufficient authentication or security in place. They then lack the controls and oversight needed to protect those connections and to prevent data leakage, misuse and non-compliance. An AI gateway can mitigate those risks.

But to return to the question: will AI collapse the app stack? Agent-to-API traffic will undoubtedly become more direct, relegating the web and mobile layers when it comes to traffic monitoring, while authentication and authorisation mechanisms will need to be revised to accommodate machine access. That’s not going to happen overnight, which is why current AI deployments will need separate provisions in place, such as an AI gateway to monitor and control access via MCP. Adopting an easily configurable AI gateway that can keep pace with future protocol revisions is therefore vital to protecting the business as the stack fractures and reforms.

In short, the enterprise needs to change its mindset from thinking ‘the future of apps is apps’ to ‘the future of apps is agents’. As agents become autonomous actors executing complex, multi-step tasks via APIs, they will take the swiftest route; it’s down to the business to put in place the controls needed to authenticate and monitor those agents and their activity, or risk the consequences.
