Discord has detailed how it rebuilt its machine learning platform after hitting the limits of single-GPU training. By standardising on Ray and Kubernetes, introducing a one-command cluster CLI, and automating workflows through Dagster and KubeRay, the company turned distributed training into a routine operation. The changes enabled daily retrains for large models and contributed to a 200% uplift in a key ads ranking metric.
Similar engineering reports are emerging from companies such as Uber, Pinterest, and Spotify as bespoke models grow larger and are retrained more frequently. Each describes the move from manual, team-owned GPU setups to shared, programmable platforms that prioritise consistency, observability, and faster iteration.
Discord’s shift began when teams independently created Ray clusters by copying open-source examples and modifying YAML files for each workload. This led to configuration drift, unclear ownership, and inconsistent GPU usage. The platform team set out to make distributed ML predictable by standardising how clusters were created and managed.
The resulting platform centres on Ray and Kubernetes but is defined by the abstractions on top. Through a CLI, engineers request clusters by specifying high-level parameters, and the tool generates the Kubernetes resources needed to run Ray from vetted templates. This removes the need for teams to understand low-level cluster configuration and ensures that scheduling, security, and resource policies are applied consistently.
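Discord has not published its CLI or templates, so the following Python sketch only illustrates the idea: a thin wrapper expands a handful of high-level parameters into a KubeRay RayCluster manifest rendered from a platform-owned template, keeping policy decisions in one place. The parameter names, labels, image, and template contents are assumptions, and the pod specs are elided.

```python
# Hypothetical sketch of "high-level parameters in, vetted Kubernetes
# resources out"; not Discord's actual CLI or template library.
import copy

# Base template owned by the platform team: scheduling, security, and
# resource policies are encoded here once rather than copied per team.
VETTED_RAYCLUSTER_TEMPLATE = {
    "apiVersion": "ray.io/v1",
    "kind": "RayCluster",
    "metadata": {"labels": {"owned-by": "ml-platform"}},
    "spec": {
        # Head group pod spec elided for brevity.
        "headGroupSpec": {"rayStartParams": {}, "template": {"spec": {"containers": []}}},
        "workerGroupSpecs": [
            {
                "groupName": "gpu-workers",
                "replicas": 0,
                "rayStartParams": {},
                "template": {
                    "spec": {
                        "containers": [
                            {
                                "name": "ray-worker",
                                "image": "rayproject/ray:latest",
                                "resources": {"limits": {}},
                            }
                        ]
                    }
                },
            }
        ],
    },
}


def render_cluster(name: str, num_workers: int, gpus_per_worker: int) -> dict:
    """Expand a few high-level parameters into a full RayCluster manifest."""
    manifest = copy.deepcopy(VETTED_RAYCLUSTER_TEMPLATE)
    manifest["metadata"]["name"] = name

    worker_group = manifest["spec"]["workerGroupSpecs"][0]
    worker_group["replicas"] = num_workers
    container = worker_group["template"]["spec"]["containers"][0]
    container["resources"]["limits"]["nvidia.com/gpu"] = gpus_per_worker
    return manifest


# A command such as `create-cluster --workers 8 --gpus-per-worker 4` could map
# onto a call like this before the manifest is applied to the cluster.
cluster_spec = render_cluster("ads-ranking-train", num_workers=8, gpus_per_worker=4)
```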
Training workflows were consolidated in Dagster, which interacts with KubeRay to create and destroy Ray clusters as part of a pipeline. Jobs that previously required manual setup now run through a single orchestration layer, with the cluster lifecycle handled automatically from creation to teardown.
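Again as an illustration rather than Discord's code, a Dagster job along these lines could create a Ray cluster through the KubeRay RayCluster custom resource, run training against it, and tear it down when the run completes. The names, namespace, and head-service address convention are assumptions, and the cluster spec is elided.

```python
# Minimal sketch of driving a Ray cluster's lifecycle from Dagster via
# KubeRay custom resources; names and namespaces are illustrative only.
import ray
from dagster import job, op
from kubernetes import client, config

RAY_GROUP, RAY_VERSION, RAY_PLURAL = "ray.io", "v1", "rayclusters"
NAMESPACE = "ml-training"            # hypothetical namespace
CLUSTER_NAME = "ads-ranking-train"   # hypothetical cluster name


def custom_objects_api() -> client.CustomObjectsApi:
    config.load_incluster_config()   # assumes the pipeline runs in-cluster
    return client.CustomObjectsApi()


@op
def create_ray_cluster() -> str:
    """Submit a RayCluster resource for the KubeRay operator to reconcile."""
    manifest = {
        "apiVersion": f"{RAY_GROUP}/{RAY_VERSION}",
        "kind": "RayCluster",
        "metadata": {"name": CLUSTER_NAME},
        # headGroupSpec / workerGroupSpecs would come from vetted templates.
        "spec": {},
    }
    custom_objects_api().create_namespaced_custom_object(
        RAY_GROUP, RAY_VERSION, NAMESPACE, RAY_PLURAL, manifest
    )
    return CLUSTER_NAME


@op
def run_training(cluster: str) -> str:
    """Connect to the Ray head service and launch the distributed job."""
    # KubeRay exposes the head node behind a `<name>-head-svc` service.
    ray.init(address=f"ray://{cluster}-head-svc.{NAMESPACE}.svc:10001")
    # ... Ray Train / Ray Tune workload would run here ...
    ray.shutdown()
    return cluster


@op
def teardown_ray_cluster(cluster: str) -> None:
    """Delete the RayCluster so GPUs are released after the run."""
    custom_objects_api().delete_namespaced_custom_object(
        RAY_GROUP, RAY_VERSION, NAMESPACE, RAY_PLURAL, cluster
    )


@job
def daily_retrain():
    teardown_ray_cluster(run_training(create_ray_cluster()))
```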
Discord also built X-Ray, a UI that shows active clusters, job logs, and resource usage. Together, the CLI, the orchestration layer, and X-Ray turned distributed training into a predictable workflow rather than a bespoke setup exercise.

According to the company, the platform enabled daily training for large models and made it easier for engineers to adopt distributed techniques. Teams were able to implement new training frameworks without direct involvement from the platform group. The improvements translated into measurable product gains, including better ads relevance.
Other organisations have described similar transitions. Uber migrated parts of its Michelangelo platform to Ray on Kubernetes and reported higher throughput and GPU utilisation. Pinterest built a Ray control plane that manages cluster lifecycle and centralises logs and metrics, reducing operational friction for ML engineers. Spotify created “Spotify-Ray”, a GKE-based environment where users launch Ray clusters through a CLI or SDK to unify experimentation and production workflows.
However, there are also cautionary accounts of the limits of internal ML platforms. In a post from earlier this year, CloudKitchens described how its first Kubernetes-based system became overly complex, with simple ML jobs taking more than ten minutes to start and Bazel-based workflows creating ongoing maintenance friction for Python users.
Taken together, these accounts suggest a shift toward shared ML platforms with clearer abstractions and more predictable workflows. Such platforms can unlock faster iteration and more reliable access to distributed compute, but they also introduce design and maintenance trade-offs of their own.
