Infinidat introduces RAG Workflow Deployment Architecture

Infinidat has introduced its Retrieval-Augmented Generation (RAG) workflow deployment architecture to enable enterprises to fully leverage generative AI (GenAI).

Thursday, 14th November 2024, by Phil Alsop

This dramatically improves the accuracy and relevance of AI models with up-to-date, private data drawn from multiple company data sources, including both unstructured and structured data, such as databases, held on existing Infinidat platforms.

With Infinidat’s RAG architecture, enterprises utilise Infinidat’s existing InfiniBox® and InfiniBox™ SSA enterprise storage systems as the basis to optimise the output of AI models, without the need to purchase any specialised equipment. Infinidat also provides the flexibility of using RAG in a hybrid multi-cloud environment, with InfuzeOS™ Cloud Edition, making the storage infrastructure a strategic asset for unlocking the business value of GenAI applications for enterprises.

“Infinidat will play a critical role in RAG deployments, leveraging data on InfiniBox enterprise storage solutions, which are perfectly suited for retrieval-based AI workloads,” said Eric Herzog, CMO at Infinidat. “Vector databases that are central to obtaining the information to increase the accuracy of GenAI models run extremely well in Infinidat’s storage environment. Our customers can deploy RAG on their existing storage infrastructure, taking advantage of the InfiniBox system’s high performance, industry-leading low latency, and unique Neural Cache technology, enabling delivery of rapid and highly accurate responses for GenAI workloads.”

RAG augments AI models using relevant and private data retrieved from an enterprise’s vector databases. Vector database capabilities are offered by a number of vendors and platforms, such as Oracle, PostgreSQL, MongoDB and DataStax Enterprise. These are used during the AI inference process that follows AI training. As part of a GenAI framework, RAG enables enterprises to auto-generate more accurate, more informed and more reliable responses to user queries. It enables AI learning models, such as a Large Language Model (LLM) or a Small Language Model (SLM), to reference information and knowledge beyond the data on which they were trained. It not only customises general models with a business’ most up-to-date information, but it also eliminates the need to continually re-train AI models, which is resource-intensive.
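The retrieval step described above can be sketched in a few lines. The following is a minimal, illustrative example only: the "embedding" is a toy bag-of-words vector rather than a production model, the `VectorStore` class is a hypothetical stand-in for a real vector database, and the final call to an LLM is omitted.

```python
import math
from collections import Counter

def embed(text):
    """Toy embedding: normalised bag-of-words counts (illustrative only)."""
    counts = Counter(text.lower().split())
    norm = math.sqrt(sum(c * c for c in counts.values()))
    return {w: c / norm for w, c in counts.items()}

def cosine(a, b):
    """Cosine similarity between two sparse embeddings."""
    return sum(a[w] * b.get(w, 0.0) for w in a)

class VectorStore:
    """Hypothetical stand-in for an enterprise vector database."""
    def __init__(self):
        self.docs = []  # list of (embedding, original text)

    def add(self, text):
        self.docs.append((embed(text), text))

    def top_k(self, query, k=2):
        """Return the k documents most similar to the query."""
        q = embed(query)
        ranked = sorted(self.docs, key=lambda d: cosine(q, d[0]), reverse=True)
        return [text for _, text in ranked[:k]]

def build_prompt(query, store):
    """Augment the user's query with retrieved private context
    before it is sent to an LLM or SLM for generation."""
    context = "\n".join(store.top_k(query))
    return f"Context:\n{context}\n\nQuestion: {query}"

# Private enterprise data that the base model was never trained on.
store = VectorStore()
store.add("Q3 revenue grew 12 percent year over year.")
store.add("The new storage array ships in November.")

prompt = build_prompt("When does the storage array ship?", store)
```

The key point is that the model answers from retrieved, current enterprise data rather than from its frozen training set, which is what makes re-training unnecessary.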

“Infinidat is positioning itself the right way as an enabler of RAG inferencing in the GenAI space,” said Marc Staimer, President of Dragon Slayer Consulting. “Retrieval-augmented generation is a high value proposition area for an enterprise storage solution provider that delivers high levels of performance, 100% guaranteed availability, scalability, and cyber resilience that readily apply to LLM RAG inferencing. With RAG inferencing being part of almost every enterprise AI project, the opportunity for Infinidat to expand its impact in the enterprise market with its highly targeted RAG reference architecture is significant.”

“Infinidat is bringing enterprise storage and GenAI together in a very important way by providing a RAG architecture that will enhance the accuracy of AI. It makes perfect sense to apply this retrieval-augmented generation for AI to where data is actually stored in an organisation’s data infrastructure. This is a great example of how Infinidat is propelling enterprise storage into an exciting AI-enhanced future,” said Stan Wysocki, President at Mark III Systems.

Fine-tuning AI in the Enterprise Storage Infrastructure

Inaccurate or misleading results from a GenAI model, referred to as “AI hallucinations,” are a common problem that has held back the adoption and broad deployment of AI within enterprises. An AI hallucination may present inaccurate information as “fact,” cite non-existent data, or provide false attribution – all of which tarnish AI and expose a gap that calls for the continual refinement of data queries. A focus on AI models without a RAG strategy tends to rely on a large amount of publicly available data, while under-utilising an enterprise’s own proprietary data assets.

To address this major challenge in GenAI, Infinidat is making its architecture available for enterprises to continuously refine a RAG pipeline with new data, thereby reducing the risk of AI hallucinations. By enhancing the accuracy of AI model-driven insights, Infinidat is helping to fulfil the promise of GenAI for enterprises. Infinidat’s solution can encompass any number of InfiniBox platforms and extends to third-party storage solutions via file-based protocols such as NFS.
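The continuous-refinement idea can be illustrated with a short sketch. Everything here is hypothetical: the temporary directory stands in for a file share (such as an NFS export from existing storage), and the plain list `index` stands in for the embed-and-upsert call a real vector database would provide.

```python
import pathlib
import tempfile

def refresh_index(index, mount_point, seen):
    """Scan a file share for documents not yet indexed and ingest them,
    so retrieval always reflects the latest enterprise data.
    `index` is a plain list standing in for a real vector-database ingest."""
    added = 0
    for path in sorted(pathlib.Path(mount_point).glob("*.txt")):
        if path.name not in seen:
            index.append(path.read_text())  # real pipeline: embed + upsert
            seen.add(path.name)
            added += 1
    return added

# Demo against a temporary directory standing in for the mounted share.
with tempfile.TemporaryDirectory() as share:
    (pathlib.Path(share) / "policy.txt").write_text("Updated travel policy.")
    index, seen = [], set()
    first = refresh_index(index, share, seen)   # picks up the new document
    second = refresh_index(index, share, seen)  # nothing new on second pass
```

Running a refresh like this on a schedule is one simple way a RAG pipeline stays current without ever re-training the underlying model.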

In addition, to simplify and accelerate the rollout of RAG for enterprises, Infinidat integrates with public cloud providers, using its award-winning InfuzeOS Cloud Edition for AWS and Azure to support RAG in hybrid cloud configurations. This complements the work that hyperscalers are doing to build out LLMs at larger scale for the initial training of AI models. The combination of AI models and RAG is a key component in defining the future of generative AI.
