The contribution below is from an external party. The editors are not responsible for the information provided.
New cloud GPU offering delivers business flexibility, performance and cost-efficiency for globally scaling AI-native applications with the AMD Instinct™ MI300X
Amsterdam, September 25, 2024 – Vultr, the world’s largest private cloud computing platform, today announced that the new AMD Instinct™ MI300X accelerator and ROCm™ open software will become available within Vultr’s composable cloud solution.
The collaboration between Vultr’s composable cloud infrastructure and AMD’s next-generation silicon architecture opens new frontiers for GPU-accelerated workloads from the data center to the edge.
“Innovation thrives in an open ecosystem,” said JJ Kardwell, CEO of Vultr. “The future of enterprise AI workloads lies in open environments that enable flexibility, scalability and security. AMD accelerators offer our customers an unparalleled price-performance ratio. The balance between high memory and low energy consumption supports sustainability efforts and gives them the opportunity to efficiently drive innovation and growth through AI.”
Building a composable cloud
With AMD ROCm open software and Vultr’s cloud platform, enterprises have access to an environment for AI development and deployment. The open nature of the AMD architecture and Vultr infrastructure gives enterprises access to thousands of open source, pre-trained models and frameworks with a drop-in code experience. This creates an optimized environment for AI development that allows projects to move forward quickly.
“We are proud of our close partnership with Vultr, as the cloud platform is designed to manage high-performance AI training and inferencing tasks and deliver improved overall efficiency,” said Negin Oliver, corporate vice president of business development, Data Center GPU Business Unit, AMD. “With the adoption of AMD Instinct MI300X accelerators and ROCm open software for these latest deployments, Vultr’s customers will benefit from a truly optimized system that can manage a wide range of AI-intensive workloads.”
The AMD architecture on Vultr infrastructure is designed for next-generation workloads and enables true cloud-native orchestration of all AI resources. AMD Instinct™ accelerators and ROCm software management tools integrate seamlessly with the Vultr Kubernetes Engine for Cloud GPU to create GPU-accelerated Kubernetes clusters that can power the most resource-intensive workloads anywhere in the world. These platform capabilities give developers and innovators the tools to build cutting-edge AI and machine learning solutions for the most complex business challenges.
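As an illustration of how such GPU-accelerated Kubernetes clusters are typically consumed (this example is not taken from the announcement): with AMD's Kubernetes device plugin installed, a pod can request an accelerator through the `amd.com/gpu` extended resource name. The pod name and container image below are illustrative assumptions, not Vultr-specific values.

```yaml
# Illustrative pod spec: requests one AMD GPU via the extended resource
# name exposed by AMD's Kubernetes device plugin. The pod name and image
# are placeholder assumptions for this sketch.
apiVersion: v1
kind: Pod
metadata:
  name: rocm-workload            # hypothetical pod name
spec:
  containers:
    - name: trainer
      image: rocm/pytorch:latest # ROCm-enabled base image (assumption)
      resources:
        limits:
          amd.com/gpu: 1         # one GPU scheduled by the device plugin
```

Applied with `kubectl apply -f`, the scheduler then places the pod on a node that advertises an available AMD GPU.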
The partnership has several benefits:
Improved price-performance ratio: Vultr’s high-performance cloud compute, accelerated by AMD GPUs, delivers exceptional processing power for demanding workloads while maintaining cost efficiency.
Scalable computing power and optimized workload management: Vultr’s scalable cloud infrastructure, combined with AMD’s advanced processing capabilities, allows businesses to effortlessly scale their computing power as demand grows.
Accelerated discovery and innovation in R&D: Vultr’s cloud infrastructure provides the compute power and scalability developers need to leverage AMD Instinct GPUs, AMD ROCm™ open software and the broad partner ecosystem, helping them solve complex problems and shorten discovery and innovation cycles.
Optimized for AI inference: Vultr’s platform is optimized for AI inference, with AMD MI300X GPUs delivering fast, scalable and energy-efficient processing of AI models, enabling lower latency and higher throughput.
Sustainable computing: Vultr’s eco-friendly cloud infrastructure helps users achieve energy-efficient, sustainable computing in large-scale operations with efficient AI technologies from AMD.
To learn more about Vultr’s composable cloud solutions, visit our website.
View information about the AMD Instinct™ MI300X accelerator here.