DeepSeek announced the release and open-source launch of its latest AI model, DeepSeek-V3, via a WeChat post on Tuesday. Users can now interact with the V3 model on DeepSeek’s official website. According to the post, DeepSeek-V3 has 671 billion total parameters, of which 37 billion are activated per token, and was pre-trained on 14.8 trillion tokens. Compared with the V2.5 version, the new model’s generation speed has tripled, reaching a throughput of 60 tokens per second. Although it currently lacks multi-modal input and output support, DeepSeek-V3 excels at multilingual processing, particularly in algorithmic coding and mathematics. In multiple benchmark tests, DeepSeek-V3 outperformed open-source models such as Qwen2.5-72B and Llama-3.1-405B, and matched the performance of top proprietary models such as GPT-4o and Claude-3.5-Sonnet. [DeepSeek official WeChat account, in Chinese]