DeepSeek has introduced DeepSeek-V3.2 and DeepSeek-V3.2-Speciale, two AI models that combine complex reasoning with the ability to use tools autonomously.
Why it matters. The Hangzhou-based company claims that DeepSeek-V3.2 matches the performance of GPT-5 on multiple reasoning benchmarks. The Speciale model reaches the level of Gemini-3 Pro and has earned gold medals in international mathematics and computer science Olympiads.
The context. DeepSeek stunned the world in January with a model that was revolutionary in efficiency and cost. Now it raises the stakes with open source systems that directly challenge OpenAI and Google on reasoning capabilities.
Technical innovation. DeepSeek-V3.2 integrates “thinking” directly into tool usage for the first time. The model can reason internally while running web searches, operating a calculator, or writing code.
The system works in two modes:
- With visible reasoning (similar to what ChatGPT and company display).
- Without any reasoning.
The chain of thought persists between tool calls and is reset only when the user sends a new message.
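To make that flow concrete, here is a minimal sketch of the interleaved “think, call a tool, keep thinking” loop. It assumes a generic chat-style backend: `call_model`, `run_tool`, and the `Step` type are invented stubs for illustration, not DeepSeek's actual API.

```python
from dataclasses import dataclass

# Illustrative sketch only: call_model() and run_tool() stand in for a real
# model backend and real tools; they are NOT DeepSeek's actual API.

@dataclass
class Step:
    thoughts: list          # new reasoning produced at this step
    tool_call: str | None   # tool request, or None when the answer is ready
    answer: str = ""

def call_model(messages, reasoning):
    # Stub: a real backend would condition on `reasoning` so far and decide
    # whether to keep thinking, call a tool, or answer.
    if not any(m["role"] == "tool" for m in messages):
        return Step(thoughts=["need fresh data"], tool_call="web_search: deepseek v3.2")
    return Step(thoughts=["data looks sufficient"], tool_call=None, answer="done")

def run_tool(call):
    return f"results for <{call}>"   # stub tool executor

def agent_turn(messages):
    """One user turn: the chain of thought persists across tool calls
    and is only discarded when the next user message arrives."""
    reasoning = []                                # accumulates across tool calls
    while True:
        step = call_model(messages, reasoning)
        reasoning.extend(step.thoughts)           # thinking continues, not reset
        if step.tool_call is None:
            return step.answer                    # final visible reply
        messages.append({"role": "tool", "content": run_tool(step.tool_call)})

print(agent_turn([{"role": "user", "content": "What's new in DeepSeek-V3.2?"}]))
```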
How they did it. The researchers developed “DeepSeek Sparse Attention” (DSA), an attention mechanism that greatly reduces the computational cost of processing long contexts.
The model maintains 671 billion total parameters but activates only 37 billion per token.
In figures. DSA cuts the cost of inference on long contexts by approximately 50% compared with the previous dense architecture. The system handles 128,000-token context windows in production.
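For intuition, here is a minimal NumPy sketch of the generic top-k sparse attention idea behind designs like DSA: a cheap scoring pass picks a small subset of past tokens, and full attention runs only over that subset. Function names, shapes, and constants are illustrative; this is not DeepSeek's actual kernel.

```python
import numpy as np

def topk_sparse_attention(q, K, V, index_scores, k=2048):
    """One query step of top-k sparse attention (illustrative sketch).

    Instead of attending to all L previous tokens (O(L) per query, O(L^2)
    per sequence), a cheap indexer scores candidates and only the top-k
    keys/values enter the expensive softmax step: roughly O(L*k) overall."""
    L = K.shape[0]
    keep = np.argsort(index_scores)[-min(k, L):]     # top-k candidate positions
    scores = q @ K[keep].T / np.sqrt(q.shape[-1])    # attention over subset only
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ V[keep]

# Rough cost intuition at a 128K context: dense attention touches ~128,000
# key/value pairs per query; top-k with k=2048 touches ~1.6% of them. The
# net saving depends on how cheap the indexer pass is (DeepSeek reports
# roughly 50% lower long-context inference cost versus dense attention).
L, d = 8192, 64
q = np.random.randn(d); K = np.random.randn(L, d); V = np.random.randn(L, d)
idx = np.random.randn(L)                             # stand-in for indexer scores
print(topk_sparse_attention(q, K, V, idx).shape)     # (64,)
```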
Reinforcement learning consumed more than 10% of the total pretraining compute. The team generated more than 1,800 synthetic environments and 85,000 tasks to train agent capabilities.
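To picture what one of those synthetic tasks might look like, here is a purely illustrative sketch of a tool-use task with a verifiable reward. The schema and every field name are invented here and need not match the format in DeepSeek's technical report.

```python
from dataclasses import dataclass

# Invented-for-illustration schema of a synthetic agent-training task.

@dataclass(frozen=True)
class AgentTask:
    environment: str   # which synthetic environment the episode runs in
    tools: tuple       # tools exposed to the model during the episode
    prompt: str        # the goal given to the agent
    checker: str       # verifiable success criterion used for the RL reward

task = AgentTask(
    environment="mock_web_shop",
    tools=("search", "python"),
    prompt="Find the cheapest listed price for item X and report it.",
    checker="reported price == ground-truth minimum",
)

def reward(success: bool) -> float:
    return 1.0 if success else 0.0   # binary, automatically verifiable signal
```

Verifiable rewards like this are what make it practical to scale reinforcement learning across tens of thousands of tasks without human graders.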
The results. DeepSeek-V3.2-Speciale achieved gold-medal-level results at the 2025 International Mathematical Olympiad, the 2025 International Olympiad in Informatics, the 2025 ICPC World Finals, and the 2025 Chinese Mathematical Olympiad.
Both models are available now.
- V3.2 is available in the app, on the web and via API.
- V3.2-Speciale is API-only, at least for now.
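For those who want to try it, DeepSeek's API is OpenAI-compatible. The base URL and the `deepseek-chat` / `deepseek-reasoner` model names below match DeepSeek's public documentation at the time of writing; which identifier routes to V3.2-Speciale is not confirmed here, so check the API docs before relying on it.

```python
from openai import OpenAI

# DeepSeek exposes an OpenAI-compatible endpoint; the official SDK works as-is.
client = OpenAI(api_key="YOUR_DEEPSEEK_API_KEY",
                base_url="https://api.deepseek.com")

resp = client.chat.completions.create(
    model="deepseek-reasoner",   # reasoning mode; "deepseek-chat" answers without visible reasoning
    messages=[{"role": "user", "content": "Prove that sqrt(2) is irrational."}],
)
print(resp.choices[0].message.content)
```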
Between the lines. DeepSeek has published the full model weights and a technical report on the training process. That transparency contrasts with what large American technology companies usually do, even those that offer open source models like Llama, with an asterisk.
The Chinese startup wants to demonstrate that open source systems can compete with the most advanced proprietary models. And it does so while continuing to reduce costs.
Yes, but. Public benchmarks do not always reflect performance on real-world tasks. Direct comparisons with GPT-5 or Gemini-3 Pro depend on specific metrics that may not capture every relevant dimension.
Furthermore, tool use in reasoning mode still needs to prove itself in complex real-world scenarios. The reduced cost matters little if the quality of the responses does not hold up.
In WorldOfSoftware | DeepSeek Guide: 36 Features and Things You Can Do for Free with This AI
Featured image | Solen Feyissa
