Apple Machine Learning Research published a paper titled The Illusion of Thinking, which investigates the abilities of Large Reasoning Models (LRMs) on a set of puzzles. The researchers found that as puzzle complexity increases, LRMs encounter a “collapse” threshold beyond which the models reduce their reasoning effort, indicating a limit to the models’ scalability.
For their experiments, Apple researchers chose four puzzle problems, including Tower of Hanoi, and a variety of LRMs and standard LLMs, including o3-mini and DeepSeek-R1. Each puzzle’s complexity could be varied; the Tower of Hanoi, for example, can be posed with any number of disks, and its minimal solution grows exponentially with that number (see the sketch after the quote below). They found that as complexity increased, model behavior passed through three regimes: on simple problems, both reasoning and non-reasoning models performed similarly well; in the medium-complexity regime, the reasoning models, with their Chain-of-Thought (CoT) inference, outperformed the standard LLMs; and in the high-complexity regime, both groups’ performance “collapsed to zero.” According to Apple,
In this study, we probe the reasoning mechanisms of frontier LRMs through the lens of problem complexity…. Our findings reveal fundamental limitations in current models: despite sophisticated self-reflection mechanisms, these models fail to develop generalizable reasoning capabilities beyond certain complexity thresholds…. These insights challenge prevailing assumptions about LRM capabilities and suggest that current approaches may be encountering fundamental barriers to generalizable reasoning.
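To make that scaling concrete: the minimal Tower of Hanoi solution for n disks takes 2^n − 1 moves, so each added disk roughly doubles the length of the move sequence a model must produce. A short Python sketch (illustrative only, not the paper’s evaluation harness) that generates the optimal sequence:

```python
def hanoi_moves(n, source="A", target="C", spare="B"):
    """Return the optimal move sequence for n disks as (disk, from, to) tuples."""
    if n == 0:
        return []
    # Move n-1 disks out of the way, move the largest disk, then restack the rest.
    return (hanoi_moves(n - 1, source, spare, target)
            + [(n, source, target)]
            + hanoi_moves(n - 1, spare, target, source))

# The optimal solution length is 2^n - 1: 1, 3, 7, 15, ..., 1023 moves for 1-10 disks.
for n in range(1, 11):
    print(n, len(hanoi_moves(n)))
```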
LRMs such as o3 and DeepSeek-R1 are LLMs that have been fine-tuned to generate step-by-step instructions for themselves before producing a response to users; in essence, the models “think out loud” to produce better answers. This allows them to outperform their “standard” LLM counterparts on many tasks, especially coding, mathematics, and science benchmarks.
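In raw model output, the reasoning trace typically precedes the final answer and is explicitly delimited; DeepSeek-R1, for instance, wraps it in <think>…</think> tags. The following sketch separates the two parts, assuming that R1-style tag format (other models, and APIs that hide the trace, behave differently):

```python
import re

def split_reasoning(raw_output: str) -> tuple[str, str]:
    """Split an R1-style completion into (reasoning_trace, final_answer)."""
    match = re.search(r"<think>(.*?)</think>", raw_output, flags=re.DOTALL)
    if match is None:
        # No visible trace, e.g. a standard LLM or an API that strips the reasoning.
        return "", raw_output.strip()
    return match.group(1).strip(), raw_output[match.end():].strip()

trace, answer = split_reasoning(
    "<think>With 3 disks, first move the top two to the spare peg…</think>"
    "Move disk 1 to C, then disk 2 to B, …"
)
```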
As part of their experiments, the Apple team analyzed the reasoning traces generated by the models. They noted that for simpler problems, the models would often “overthink”: the correct solution would appear early in the trace, but the models would continue to explore incorrect ideas. On medium-complexity problems, by contrast, the models would explore incorrect solutions before finding the correct one.
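Checking where in a trace a correct answer first appears requires a mechanical validator for each puzzle. A hedged sketch of that idea for Tower of Hanoi, assuming candidate move lists have already been extracted from the trace (the extraction step is omitted here):

```python
def is_valid_solution(moves, n):
    """Replay (disk, source, target) moves; True if all n disks legally end on peg C."""
    pegs = {"A": list(range(n, 0, -1)), "B": [], "C": []}  # list ends are the peg tops
    for disk, src, dst in moves:
        if not pegs[src] or pegs[src][-1] != disk:
            return False  # the named disk is not on top of the source peg
        if pegs[dst] and pegs[dst][-1] < disk:
            return False  # a larger disk may not sit on a smaller one
        pegs[dst].append(pegs[src].pop())
    return pegs["C"] == list(range(n, 0, -1))

def first_correct_index(candidates, n):
    """Position (in trace order) of the first valid candidate solution, or None."""
    for i, moves in enumerate(candidates):
        if is_valid_solution(moves, n):
            return i
    return None
```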
Apple’s paper sparked a wide debate in the AI community. Gary Marcus, a cognitive scientist and critic of the current state of AI, wrote about the research, saying:
What the Apple paper shows, most fundamentally, regardless of how you define [Artificial General Intelligence (AGI)], is that LLMs are no substitute for good well-specified conventional algorithms. (They also can’t play chess as well as conventional algorithms, can’t fold proteins like special-purpose neurosymbolic hybrids, can’t run databases as well as conventional databases, etc.)
Open source developer and AI commentator Simon Willison pointed out:
I’m not interested in whether or not LLMs are the “road to AGI”. I continue to care only about whether they have useful applications today, once you’ve understood their limitations. Reasoning LLMs are a relatively new and interesting twist on the genre. They are demonstrably able to solve a whole bunch of problems that previous LLMs were unable to handle, hence why we’ve seen a rush of new models from OpenAI and Anthropic and Gemini and DeepSeek and Qwen and Mistral….They’re already useful to me today, whether or not they can reliably solve the Tower of Hanoi….
Apple acknowledges several limitations of the research, noting in particular that the experiments mostly relied on “black box” API calls, which left the researchers unable to examine the models’ internal state. The researchers also concede that the use of puzzles means their conclusions may not generalize to all reasoning domains.