Apple isn’t interested in letting rivals like Nvidia get a leg up in the AI race. To stay competitive, it has enabled its Thunderbolt 5-equipped Macs to link together as more advanced “AI clusters” for tandem AI model processing, similar to Nvidia’s recently released DGX products.
Apple’s Answer to Nvidia DGX? The Mac, Of Course
This isn’t uncharted territory for Apple, but it’s a first using Thunderbolt 5. The capability will arrive with macOS 26.2, currently in beta, and relies on MLX, Apple’s open-source array framework for machine learning. MLX gives developers an application programming interface (API) for building and testing new AI models and iterating on them with new features and capabilities.
Apple didn’t make this happen alone; it collaborated with developer Exo Labs to create the tandem AI processing capability on top of the MLX API. The tool, known as EXO 1.0, lets up to four Thunderbolt 5 Mac Studio desktops or two MacBook Pro laptops work on the same AI model, one far larger than any single machine could handle alone: up to 1 trillion parameters. The Thunderbolt 5 connections let the systems operate as one, pooling their unified memory into a single resource for the model to tap.
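Whether a model fits in that pooled memory is simple arithmetic: the model's weight footprint has to fit in the combined unified memory. Here's a back-of-the-envelope sketch in Python; the 4-bit quantization assumption (roughly half a byte per parameter) and the 512GB-per-machine figure are illustrative assumptions on our part, not Apple's published cluster specs, and activations, KV cache, and framework overhead are ignored.

```python
# Rough check: does a model's weight footprint fit in pooled unified memory?
# Assumes ~0.5 bytes per parameter (4-bit quantized weights); activations,
# KV cache, and framework overhead are ignored for simplicity.

BYTES_PER_PARAM_4BIT = 0.5

def weights_gb(params: float) -> float:
    """Approximate weight size in GB for a 4-bit quantized model."""
    return params * BYTES_PER_PARAM_4BIT / 1e9

def fits(params: float, machines: int, unified_memory_gb: float) -> bool:
    """True if the quantized weights alone fit in the pooled memory."""
    return weights_gb(params) <= machines * unified_memory_gb

one_trillion = 1e12
print(weights_gb(one_trillion))    # 500.0 GB of weights
print(fits(one_trillion, 1, 512))  # True, but with almost no headroom
print(fits(one_trillion, 4, 512))  # True, with ~1.5TB to spare
```

Under those assumptions, a 1-trillion-parameter model technically squeezes onto a single 512GB machine but leaves no room for anything else; four pooled machines make it comfortable.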
(Credit: Brian Westover/PCMag)
In a recent web demonstration, members of Apple’s product team showed us four M3 Ultra-equipped Mac Studio desktops pooling their resources to run Kimi-K2-Thinking, a 1-trillion-parameter model, while consuming less than 500 watts (W) between them. That’s far less than the up to 700W a single traditional GPU can draw in an AI cluster.
For those keeping score, Nvidia’s DGX Spark boxes are rated to draw up to 240W under maximum load, though high-profile developers like John Carmack have cried foul, suspecting the hardware underperforms its specifications. Connecting the same number of DGX Spark systems as in Apple’s Mac Studio demo could theoretically draw up to 960W, though sustained draw that high is unlikely. Regardless, Apple’s solution might have an efficiency advantage here, especially for developers interested in running multiple clusters. As for throughput, it’s too early to draw those kinds of conclusions.
Apple Grants Its First M5 Chip Access to MLX
However, Apple was happy to shout from the digital rooftops about its M5 chip’s AI chops in a recent blog post. MacOS 26.2 lets developers access the M5’s new Neural Accelerators through MLX, while also improving memory efficiency in AI workloads. The clearest beneficiary is time-to-first-token (TTFT), the metric for how quickly a model generates its first piece of output after a prompt: TTFT is compute-bound, and Apple’s M5 has compute power in spades.
The most significant upgrade to Apple’s processors in the M5 generation is the neural accelerator in each GPU core, which substantially boosts AI performance, as we found in our review of the M5 MacBook Pro 14.
(Credit: Brian Westover/PCMag)
“The M5 chip provides dedicated matrix-multiplication operations, which are critical for many machine learning workloads,” Apple’s blog post reads. “MLX leverages the Tensor Operations (TensorOps) and Metal Performance Primitives framework introduced with Metal 4 to support the Neural Accelerators’ features.”
These advances in M5 AI compute power drastically reduce the TTFT of most large language models (LLMs). Evaluating its M5 hardware on Qwen, an LLM developed by Alibaba Cloud, Apple found that the M5 delivered a TTFT up to four times faster than its M4 counterparts.
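TTFT is straightforward to measure yourself: start a timer when the request goes out and stop it when the first token arrives. This Python sketch shows the idea against a stand-in generator, since no real model is loaded here; `fake_model_stream` and its prefill delay are hypothetical placeholders for any streaming model API.

```python
import time

def measure_ttft(stream):
    """Return (seconds until first token, the first token itself).

    `stream` is any iterator that yields tokens as a model produces them;
    only the wait for the first item is timed.
    """
    start = time.perf_counter()
    first = next(stream)
    return time.perf_counter() - start, first

# Stand-in for a real model's streaming API: a generator that "thinks"
# briefly (simulating the compute-bound prefill phase) before emitting tokens.
def fake_model_stream(prefill_seconds: float):
    time.sleep(prefill_seconds)
    yield "Hello"
    yield ","
    yield " world"

ttft, token = measure_ttft(fake_model_stream(0.05))
print(f"TTFT: {ttft:.3f}s, first token: {token!r}")
```

A real benchmark would swap the fake generator for a model's streaming output and average over many prompts, but the timing logic is the same.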
(Credit: Apple)
As the post explains, generating the first token demands high compute power, while the inference workload for every subsequent token is more memory-dependent. That means the 4x gains don’t carry over to the entire response, but thanks to the M5’s greater memory bandwidth, Apple still saw overall LLM performance improvements of 19% to 27% across various models compared with its M4 hardware.
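The reason those later tokens are memory-bound is that each decode step has to stream roughly the entire weight set through memory, so memory bandwidth, not compute, caps tokens per second. A quick Python sketch of that ceiling; the weight size and bandwidth figures here are illustrative assumptions, not measured numbers for any Apple chip.

```python
# Decode (every token after the first) is typically memory-bandwidth-bound:
# each step reads roughly the whole quantized weight set from memory once.
# All numbers below are illustrative assumptions, not measured figures.

def max_decode_tokens_per_sec(weight_gb: float, bandwidth_gb_s: float) -> float:
    """Upper bound on decode speed if every token reads all weights once."""
    return bandwidth_gb_s / weight_gb

# e.g. a hypothetical 4-bit, 12B-parameter model (~6 GB of weights) on
# hardware with 120 GB/s vs 150 GB/s of usable memory bandwidth:
print(max_decode_tokens_per_sec(6, 120))  # 20.0 tokens/s ceiling
print(max_decode_tokens_per_sec(6, 150))  # 25.0 tokens/s ceiling
```

Note that a 25% bandwidth bump lifts the decode ceiling by the same 25%, which is why Apple's reported 19% to 27% overall gains track its memory bandwidth improvement rather than the 4x compute gain.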
The post also notes that these performance improvements extend beyond text to image generation. When generating a 1,024-by-1,024 image with FLUX-dev-4bit (12 billion parameters) using MLX, Apple found the M5 hardware performed up to 3.8 times faster than M4 alternatives.
This is exciting news for anyone taking advantage of Apple Intelligence features on macOS and for those who want to develop AI on a Mac. If raw speed in those workloads is the goal, though, a PC with a dedicated Nvidia RTX graphics card may still yield the best results. That is, at least until we see how two M5 MacBook Pro laptops perform together.
About Our Experts
Joe Osborne
Deputy Managing Editor, Hardware
Experience
After starting my career at PCMag as an intern more than a decade ago, I’m back as one of its editors, focused on managing laptops, desktops, and components coverage. With 15 years of experience, I have been on staff and published in technology review publications, including PCMag (of course!), Laptop Magazine, Tom’s Guide, TechRadar, and IGN. Along the way, I’ve tested and reviewed hundreds of laptops and helped develop testing protocols. I have expertise in testing all forms of laptops and desktops using the latest tools. I’m also well-versed in video game hardware and software coverage.
Jon Martindale
Contributor
Experience
Jon Martindale is a tech journalist from the UK, with 20 years of experience covering all manner of PC components and associated gadgets. He’s written for a range of publications, including ExtremeTech, Digital Trends, Forbes, U.S. News & World Report, and Lifewire, among others. When not writing, he’s a big board gamer and reader, with a particular habit of speed-reading through long manga sagas.
Jon covers the latest PC components, as well as how-to guides on everything from how to take a screenshot to how to set up your cryptocurrency wallet. He particularly enjoys the battles between the top tech giants in CPUs and GPUs, and tries his best not to take sides.
Jon’s gaming PC is built around the iconic 7950X3D CPU, with a 7900XTX backing it up. That’s all the power he needs to play lightweight indie and casual games, as well as more demanding sim titles like Kerbal Space Program. He uses a pair of Jabra Active 8 earbuds and a SteelSeries Arctis Pro wireless headset, and types all day on a Logitech G915 mechanical keyboard.
