Vultr launches Cloud Inference

New serverless Inference-as-a-Service offering available from Vultr across six continents and 32 locations worldwide.

  • Tuesday, 19th March 2024 · Posted by Phil Alsop

Vultr has launched Vultr Cloud Inference. This new serverless platform revolutionizes AI scalability and reach by offering global AI model deployment and AI inference capabilities. Leveraging Vultr’s global infrastructure spanning six continents and 32 locations, Vultr Cloud Inference provides customers with seamless scalability, reduced latency, and enhanced cost efficiency for their AI deployments.

Today's rapidly evolving digital landscape has challenged businesses across sectors to deploy and manage AI models efficiently and effectively. This has created a growing need for inference-optimised cloud infrastructure platforms with both global reach and scalability to ensure consistently high performance. Priorities are shifting accordingly: organizations are increasingly focused on inference spending as they move their models into production. But with bigger models comes increased complexity. Developers are being challenged to optimise AI models for different regions, manage distributed server infrastructure, and ensure high availability and low latency.

With that in mind, Vultr created Vultr Cloud Inference, which will accelerate the time-to-market of AI-driven features, such as predictive and real-time decision-making, while delivering a compelling user experience across diverse regions. Users can simply bring their own model, trained on any platform, cloud, or on-premises, and it can be seamlessly integrated and deployed on Vultr's global NVIDIA GPU-powered infrastructure. With dedicated compute clusters available on six continents, Vultr Cloud Inference ensures businesses can comply with local data sovereignty, data residency, and privacy regulations by deploying their AI applications in regions that align with legal requirements and business objectives.

“Training provides the foundation for AI to be effective, but it's inference that converts AI’s potential into impact. As an increasing number of AI models move from training into production, the volume of inference workloads is exploding, but the majority of AI infrastructure is not optimised to meet the world’s inference needs,” said J.J. Kardwell, CEO of Vultr’s parent company, Constant. “The launch of Vultr Cloud Inference enables AI innovations to have maximum impact by simplifying AI deployment and delivering low-latency inference around the world through a platform designed for scalability, efficiency, and global reach.”

With the capability to self-optimise and auto-scale globally in real-time, Vultr Cloud Inference ensures AI applications provide consistent, cost-effective, low-latency experiences to users worldwide. Moreover, its serverless architecture eliminates the complexities of managing and scaling infrastructure, delivering unparalleled impact, including:

Flexibility in AI model integration and migration: Vultr Cloud Inference gives users a straightforward, serverless AI inference platform that allows for easy integration of AI models, regardless of where they were trained. Whether a model was developed on Vultr Cloud GPUs powered by NVIDIA, in a user's own data centre, or on another cloud, Vultr Cloud Inference enables hassle-free global inference.

Reduced AI infrastructure complexity: By leveraging the serverless architecture of Vultr Cloud Inference, businesses can concentrate on innovation and creating value through their AI initiatives rather than focusing on infrastructure management. Cloud Inference streamlines the deployment process, making advanced AI capabilities accessible to companies without extensive in-house expertise in infrastructure management, thereby speeding up the time-to-market for AI-driven solutions.

Automated scaling of inference-optimised infrastructure: Through real-time matching of AI application workloads and inference-optimised cloud GPUs, engineering teams can seamlessly deliver performance while ensuring the most efficient use of resources. This leads to substantial cost savings and reduced environmental impact, as they only pay for what is needed and used.

Private, dedicated compute resources: With Vultr Cloud Inference, businesses can access an isolated environment for sensitive or high-demand workloads. This provides enhanced security and performance for critical applications, aligning with goals around data protection, regulatory compliance, and maintaining high performance under peak loads.

“Demand is rapidly increasing for cutting-edge AI technologies that can power AI workloads worldwide,” said Matt McGrigg, Director of Global Business Development, Cloud Partners at NVIDIA. “The introduction of Vultr Cloud Inference will empower businesses to seamlessly integrate and deploy AI models trained on NVIDIA GPU infrastructure, helping them scale their AI applications globally.”
