Announced today at the PyTorch Conference is that the Ray AI compute engine is becoming a project hosted by the PyTorch Foundation.
The Ray AI compute engine was started by Anyscale for scaling AI workloads from a laptop up to hundreds of nodes or GPUs in the cloud. Ray focuses on parallelizing Python and handling any AI or machine learning workload, working not only with PyTorch but also with other popular software such as TensorFlow, Hugging Face, scikit-learn, and XGBoost, among other libraries/integrations.
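To give a sense of the Python parallelization Ray provides, below is a minimal sketch using Ray's task API (assumes `pip install ray` and a local machine; the square() function and input values are illustrative, not from the announcement):

```python
import ray

ray.init()  # starts a local Ray cluster, or connects to an existing one if configured

@ray.remote
def square(x):
    # Each invocation runs as a parallel task, on this machine or any node in a cluster.
    return x * x

# Launching tasks returns futures (ObjectRefs) immediately rather than blocking.
futures = [square.remote(i) for i in range(8)]

# ray.get() blocks until the tasks finish and collects their results.
print(ray.get(futures))  # [0, 1, 4, 9, 16, 25, 36, 49]
```

The same decorator-based model is what lets Ray scale a script written on a laptop out to a multi-node cluster without restructuring the code.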
Ray is joining the PyTorch Foundation alongside PyTorch and vLLM with the aim of delivering a unified open-source AI compute stack.
“Ray addresses the unique computational demands of modern AI by providing a compute framework that executes distributed workloads including:
Multimodal data processing: efficiently handles massive, diverse datasets (text, images, audio, video) in parallel.
Pre-training and post-training: scales PyTorch and other ML frameworks across thousands of GPUs for both pre-training and post-training tasks.
Distributed inference: serves models in production with high throughput and low latency, orchestrating bursts of dynamic, heterogeneous workloads across clusters.
By contributing Ray to the PyTorch Foundation, Anyscale reinforces its commitment to open governance and long-term sustainability for Ray and open source AI.”
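On the distributed inference point from the quote above, here is a minimal sketch of what serving over HTTP with Ray Serve can look like (assumes `pip install "ray[serve]"`; the Hello deployment and its echo logic are hypothetical examples, not from the announcement):

```python
import requests
from ray import serve

@serve.deployment(num_replicas=2)  # scale out by running multiple replicas
class Hello:
    async def __call__(self, request):
        # Illustrative handler: echo a greeting based on a query parameter.
        name = request.query_params.get("name", "world")
        return f"hello {name}"

# Deploy locally; Serve exposes the deployment over HTTP on port 8000 by default.
serve.run(Hello.bind())

print(requests.get("http://localhost:8000/", params={"name": "Ray"}).text)
```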
More details on Ray becoming a PyTorch Foundation-hosted project can be found via the Linux Foundation press release. Those wanting to learn more about the Ray software itself can do so via the Anyscale.com open-source page.