The floodgates have opened for building AI reasoning models on the cheap.
Researchers at Stanford and the University of Washington have developed a model that performs comparably to OpenAI o1 and DeepSeek R1 models in math and coding — for less than $50 of cloud compute credits.
What’s more, the model was trained on only 1,000 questions, and training took just 26 minutes on 16 Nvidia H100 GPUs. Stanford researcher Niklas Muennighoff said in an email to Mashable that the cost is an estimate based on the GPU runtime and the number of H100 GPUs used.
The AI industry of late is preoccupied with how new approaches to pre- and post-training can massively cut computing costs, as evidenced by DeepSeek’s disruptive impact. On top of that, developers can now build on existing AI models at little or no cost — through APIs, open-source access, and even closed-source models by distilling their outputs — bringing costs down even further.
According to the team’s research paper which was published last Friday, s1 was trained on a dataset consisting of “1,000 carefully curated questions paired with reasoning traces and answers distilled from Gemini Thinking Experimental.” Google’s Gemini Thinking Experimental model is accessible with daily limits through AI Studio. While it’s a closed-source model, that clearly hasn’t stopped researchers from making use of its responses.
Next, the researchers took an “off the shelf” pretrained model from Qwen, the Alibaba-owned lab, and performed supervised fine-tuning on their curated dataset. Then, the team imposed a token budget to control how much compute the model spent at test time. If s1 went over its budget of thinking tokens, it was cut off and forced to generate an answer with whatever reasoning it had so far. If the researchers wanted the model to spend more “test-time compute” on a problem, they would simply append “wait” to its output, which extended its thinking time and led to more accurate results.
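The budget-forcing idea described above can be sketched in a few lines of Python. This is a hedged illustration, not the researchers’ actual code: `generate_step` is a hypothetical stand-in for a real language model’s token sampler, and the token names (`"</think>"`, `"Wait"`) are assumptions about how a reasoning model marks the end of its thinking phase.

```python
# Sketch of "budget forcing": cap thinking tokens, or append "Wait"
# to prolong reasoning. generate_step is a toy stand-in for a model.

def generate_step(tokens):
    # Placeholder sampler: a real model would predict the next token here.
    # This toy version "stops thinking" whenever the length hits a multiple of 7.
    return "</think>" if len(tokens) % 7 == 0 else "think"

def budget_forced_generate(prompt, max_thinking_tokens, num_waits=0):
    """Run a thinking loop under a token budget.

    If the model tries to stop early and waits remain, append "Wait"
    to extend test-time compute. If it exceeds the budget, force the
    end-of-thinking marker so it must produce an answer now.
    """
    tokens = list(prompt)
    waits_left = num_waits
    while True:
        next_tok = generate_step(tokens)
        if next_tok == "</think>" and waits_left > 0:
            tokens.append("Wait")       # extend thinking time
            waits_left -= 1
            continue
        if next_tok == "</think>" or len(tokens) >= max_thinking_tokens:
            tokens.append("</think>")   # cut off: answer with what you have
            break
        tokens.append(next_tok)
    return tokens
```

Under this sketch, raising `num_waits` lengthens the reasoning trace while `max_thinking_tokens` keeps a hard ceiling on compute, which is the trade-off the researchers were controlling.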
By controlling the amount of time and compute spent on a problem, the researchers were able to show how increased thinking time leads to improved performance.
S1 is one example of several open-source reasoning models developed for a fraction of the cost of flagship models from Google and OpenAI. In January, UC Berkeley researchers released an open-source reasoning model called Sky-T1 that cost $450 to train, “demonstrating that it is possible to replicate high-level reasoning capabilities affordably and efficiently,” per its blog post. There’s also the open-source rStar-Math reasoning model from Microsoft Asia researchers, Tulu 3 from the nonprofit research institute Ai2, and Hugging Face’s own initiative to replicate DeepSeek’s R1.
As high-quality models become more accessible and cheaper, we’re starting to see a power shift from the few AI heavy hitters, to the many.