How RAG can help organisations overcome unrealistic expectations of AI

By Philip Miller, AI Strategist, Progress Software

Thursday, 28th August 2025

Across the UK and beyond, organisations are racing to adopt AI at scale. From customer service and operations to marketing and product development, the technology is being integrated across industries to automate tasks, personalise customer experiences and handle repetitive processes.

Conversations about AI have shifted from ‘if’ to ‘how fast’, with some organisations even going AI-first. Others are adopting a more cautious approach and keeping the technology at arm’s length until they can be confident of its reliability.

As a result, the responses these models produce are being held to an ever higher standard, with many expecting AI to deliver near-perfect results every time. This is creating a double standard in how we judge the technology versus an employee, one that erodes trust, slows adoption and stifles innovation.

Ultimately, the way organisations use AI requires a rethink. That starts with resetting expectations and adopting the technology responsibly: focusing on small use cases, iterating quickly and avoiding over-reliance on a single model. Retrieval-augmented generation (RAG) can also play a crucial role in grounding AI responses in data, ensuring output is both contextually relevant and trustworthy.

A shift in mindset

As AI becomes increasingly integrated into daily business, tools like RAG are vital for accuracy; equally important is refining how we use the technology. When a colleague makes a mistake, we see it as part of the learning process. When AI ‘hallucinates’ or delivers an imperfect answer, many conclude the technology isn’t ready for wider organisational use. However, these errors aren’t bugs in the system; they’re an expected trade-off of models that work in probabilities, not certainties. Expecting flawless performance is like hiring a new employee and demanding they never make a single mistake.

To make AI work at scale, organisations need to stop thinking in binary terms, where AI must be either perfectly right or totally wrong. Instead, the focus should be on how AI is used, such as the questions we ask it, the safeguards we put in place and how we integrate its outputs into decision-making with human oversight. Success comes from embracing AI as agile and iterative. These models can fail, learn and improve in days or even minutes, far faster than human learning cycles. This agility means our approach to deploying AI should be equally flexible.

Organisations that take a cautious, multi-year, top-down transformation plan risk getting stuck in decision paralysis, waiting for a ‘perfect’ version of AI that may never arrive. Instead, organisations need short-term, incremental projects that can deliver measurable value quickly, before scaling from there.

Practical steps for responsible AI adoption

Adopting AI responsibly means translating this agile mindset into concrete, manageable actions that deliver results while building trust, grounded in a human-centric approach and focused on outcomes.

While every organisation’s journey is unique, there are common practices that can help bridge the trust gap and accelerate adoption without compromising on accuracy or ethics. Focusing on achievable goals is key. By targeting use cases that can be delivered in weeks or months, such as improving internal processes or automating routine documentation, organisations can generate early wins that demonstrate tangible value and build confidence in the technology.

AI models are inherently imperfect, so each mistake should be treated as a learning opportunity. Analysing errors, refining prompts or experimenting with different models helps improve performance over time. Small, incremental adjustments allow teams to continuously enhance results while keeping projects manageable.
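To make that iteration loop concrete, the sketch below compares two prompt variants against a small, human-reviewed set of question-and-answer pairs and reports which performs better. The evaluation set, the prompt wording and the ask_model() stub are illustrative assumptions for the example, not a prescribed tool or workflow.

```python
# Illustrative sketch of an iteration loop: score each prompt variant against
# a small, human-reviewed evaluation set and keep whichever performs best.
# ask_model() stands in for a call to whatever model the team is trialling.

EVALUATION_SET = [
    {"question": "What is the standard VAT rate in the UK?", "expected": "20%"},
    {"question": "How many days of annual leave does the policy grant?", "expected": "25"},
]

PROMPT_VARIANTS = {
    "v1_plain": "Answer briefly: {question}",
    "v2_cautious": "Answer briefly and say 'unknown' if unsure: {question}",
}

def ask_model(prompt: str) -> str:
    """Placeholder for a real model call; returns a canned answer here."""
    return "20%" if "VAT" in prompt else "unknown"

def accuracy(template: str) -> float:
    """Fraction of evaluation questions the prompt variant answers correctly."""
    hits = 0
    for item in EVALUATION_SET:
        answer = ask_model(template.format(question=item["question"]))
        hits += answer.strip().lower() == item["expected"].lower()
    return hits / len(EVALUATION_SET)

if __name__ == "__main__":
    for name, template in PROMPT_VARIANTS.items():
        print(f"{name}: {accuracy(template):.0%} correct on the review set")
```

Tracking results against the same small review set each time a prompt or model changes keeps the iteration measurable rather than anecdotal.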

Once initial projects deliver clear benefits, adoption can expand gradually to other processes or departments. Maintaining oversight and governance ensures outputs remain accurate, relevant and aligned with ethical standards, allowing organisations to scale AI effectively while minimising risk.

The role of retrieval-augmented generation (RAG) in building trust

One of the most effective ways to improve AI reliability is through RAG. In a RAG setup, AI systems access relevant, up-to-date information from a variety of sources such as documents, video, audio and emails, before generating a response. This ensures that outputs are anchored in verified, contextually accurate data rather than relying solely on patterns learned during training, which may be incomplete or outdated.
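The sketch below shows the basic retrieve-then-generate pattern described above in Python. The in-memory knowledge base, the simple word-overlap scoring and the prompt wording are illustrative assumptions; a production system would typically use a vector or search index over the organisation’s own documents and send the grounded prompt to its chosen model.

```python
# A minimal sketch of the RAG pattern: retrieve the most relevant snippets
# from a small in-memory knowledge base, then build a prompt that anchors
# the model's answer in that context rather than in training data alone.
# The knowledge base, scoring and prompt wording are illustrative only.

from collections import Counter

KNOWLEDGE_BASE = [
    "Refund requests must be submitted within 30 days of purchase.",
    "Support tickets are triaged within one business day.",
    "The 2024 employee handbook replaces all earlier HR policies.",
]

def score(query: str, document: str) -> int:
    """Very rough relevance score: count the lowercase words shared by both texts."""
    query_words = Counter(query.lower().split())
    doc_words = Counter(document.lower().split())
    return sum((query_words & doc_words).values())

def retrieve(query: str, top_k: int = 2) -> list[str]:
    """Return the top_k most relevant documents for the query."""
    ranked = sorted(KNOWLEDGE_BASE, key=lambda doc: score(query, doc), reverse=True)
    return ranked[:top_k]

def build_prompt(query: str) -> str:
    """Assemble a prompt that instructs the model to answer only from the retrieved context."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query))
    return (
        "Answer the question using only the context below. "
        "If the context does not contain the answer, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    )

if __name__ == "__main__":
    # The grounded prompt would be sent to the organisation's chosen LLM;
    # that call is deliberately omitted here.
    print(build_prompt("How long do customers have to request a refund?"))
```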

By connecting AI to the right data in the right way, organisations can reduce hallucinations, deliver context-aware answers and increase stakeholder confidence, all critical steps for responsible adoption at scale. Embedding a culture of careful, iterative AI use complements RAG, creating a feedback loop that further strengthens trust and ensures insights are actionable and reliable across the organisation.

Closing thoughts

Ultimately, adopting AI is as much about managing expectations as it is about managing technology. AI will make mistakes, just as humans do. The question is whether we design our systems, workflows and governance to catch those mistakes, learn from them and improve over time. AI is not magic. It’s a powerful tool with clear strengths and limitations. Organisations that acknowledge both, and design around them, will be the ones that stand the test of time.