Contextual data: the secret sauce for reliable AI agents

By Peter Manta, AI Strategy and Practice Director, Informatica by Salesforce.

  • Monday, 30th March 2026. Posted by Phil Alsop

Agentic AI is stealing the headlines. Here’s a memorable, if not so desirable, example from a New Zealand supermarket chain: “Supermarket AI meal planner app suggests recipe that would create chlorine gas” (The Guardian, August 10th 2023).

The bot was designed with the best of intentions: to help customers create recipes from leftovers amid the cost-of-living crisis. And it worked, to an extent… The problem was that, while the AI understood the structure of a recipe, it didn’t ‘get’ the purpose. So, when humans did the human thing, and started entering ingredients like bleach and ant poison, the bot ploughed on regardless, conjuring up meticulously planned but deadly delights like glue sandwiches and French toast à la methanol.

In reality, the missing ingredient was context, which as humans, we take for granted. Humour me, if you will, with another culinary example:

Recently, I took my first trip to London. Naturally, while there, I made sure to sample the UK’s signature dish, fish and chips (I’m a fan!). A week later, I was in Mexico, where I ordered chips again, but got a completely different platter, this time much crispier and served with salsa. On both occasions, despite using the same word in my order, I knew to expect that different things would arrive at my table. Get that expectation wrong, though, and I could plausibly have ended up with a plate full of CPUs (I’ll pass).

So, here’s the crux: AI agents don’t have the innate contextual understanding that people do. And, with so many companies starting to roll out agentic AI, tempted by the competitive advantages it can bring for cost, productivity, and more, there could be surprises on the horizon for data leaders who don’t factor in ‘decision quality’ on top of ‘data quality’.

Decision quality is a distinct concept from data quality, and the distinction matters. Data quality asks 'is this record accurate, complete, and current?'. Decision quality asks 'does an AI agent understand enough about this data to act on it appropriately?'. A business glossary entry or catalogue definition is necessary; it tells the agent what a field means, but it isn't sufficient. The agent needs provenance to understand where the value came from, how it was derived, and in what business context it’s intended to be used. Without that, even clean data can produce confidently wrong decisions.

Unchecked contextual failures can snowball quickly

Think about a traditional ML system. If it’s working from flawed data, the worst that will happen is inaccuracy. The chances are its human supervisors will realise there’s an issue, identify and correct the offending data points, re-check the system’s outputs, and carry on with their day.

With a network of AI agents, though, the scenario is quite different. For a start, it might not be clear that there’s a flaw at all. Everything could seem normal until, perhaps weeks down the line, it emerges that an agent has used the available data to make a bad decision. That output has then affected the decisions made by another agent. Then another. And another…
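To see why that compounding matters, here is a deliberately toy Python sketch. The agents, the 10% amplification factor, and the chain length are all invented for illustration; the mechanism, each agent acting on the previous agent’s output, is the point:

```python
# Illustrative only: a toy chain of agents, each basing its decision on the
# previous agent's output. A small upstream error compounds at every hop.
def agent_step(value: float, error_factor: float = 1.10) -> float:
    """Each (hypothetical) agent amplifies upstream inaccuracy by 10%."""
    return value * error_factor

estimate = 1.0  # initial deviation from the 'true' value, in arbitrary units
for hop in range(5):
    estimate = agent_step(estimate)
print(round(estimate, 3))  # 1.611 after five hops, i.e. 1.1 ** 5
```

A 10% error that a human reviewer might wave through becomes a 61% error five decisions later, and no single agent in the chain did anything obviously wrong.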

Now, cast your mind back to the supermarket chain. The recipe story, although cringeworthy, is quite funny. But if the AI was also tasked with instructing a food assembly plant to mail those lethal ingredients out to customers, the outcome would be anything but, with dangers suddenly becoming much more than just reputational.

Think inside the (black) box

One of the biggest challenges with agentic AI right now is audit trails. Many compare these systems to ‘black boxes’ because of their lack of transparency. That means tracing a chain of decisions back to its ‘bad seed’ can be virtually impossible without specialised observability tools.

That puts data leaders in a pickle. On one hand, they know that if their organisation doesn’t move quickly on agentic AI, the competition will steal a march. On the other hand, the risks of going all-in while underprepared could jeopardise all the potential benefits.

The solution involves everything you’d normally associate with data integrity: governance, metadata, lineage, currency, security, etc. But now those qualities are just table stakes. Achieving a decision-quality foundation means not only knowing what the data says, but why it exists and where it came from: its context.

Think of it like building a solid ‘truth layer’, the fount of all knowledge upon which your agents will rely. Data teams need to know where that truth lives because, if it’s not right at the source, every downstream action based on it will compound the negative impact.
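A truth layer rests on the ability to walk any value back to its source. The sketch below shows the idea with a bare-bones Python lineage lookup; the dict-based graph and dataset names like "crm.orders" are hypothetical stand-ins for a real lineage catalogue:

```python
# A minimal sketch of an upstream-lineage walk, assuming each dataset records
# its single upstream parent (real lineage graphs are richer than this).
lineage = {
    "dashboard.revenue": "warehouse.fact_sales",
    "warehouse.fact_sales": "crm.orders",
    "crm.orders": None,  # source system: nothing upstream
}

def trace_to_source(dataset: str) -> list[str]:
    """Walk upstream, hop by hop, until the original source is reached."""
    path = [dataset]
    while lineage.get(path[-1]) is not None:
        path.append(lineage[path[-1]])
    return path

print(trace_to_source("dashboard.revenue"))
# ['dashboard.revenue', 'warehouse.fact_sales', 'crm.orders']
```

If an agent’s bad decision traces back through a path like this, the team can fix the problem once, at "crm.orders", instead of chasing symptoms downstream.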

It’s not a question of satisfying the compliance team. It’s a matter of strategic business advantage. Organisations that can hit all those markers consistently are much better placed to step up to agentic AI with confidence.