The French AI startup Mistral AI has announced the launch of Mistral 3, a family of 10 large and small open source multimodal models designed to run on all types of devices and deployment targets depending on their size and purpose: from a smartphone or an autonomous drone to large-scale enterprise cloud systems.
The Mistral 3 family includes a new large model, Mistral Large 3, and a suite of smaller models, Ministral 3, optimized for edge applications. All of the models are released under the Apache 2.0 license, which allows unrestricted commercial use, unlike the models of other AI companies such as OpenAI or Google.
With this step, Mistral is also betting on a future in which the development and advancement of AI is not confined to companies building proprietary, closed systems, but instead gives businesses the greatest possible flexibility to customize and deploy AI tailored to their specific needs. In many cases this means using smaller models that can run and function without external connectivity.
Mistral Large 3, the most prominent model of the family, uses its own architecture with 41 billion active parameters drawn from a total pool of 675 billion. It can process both text and images, handles context windows of up to 256,000 tokens, and has been trained with a specific emphasis on languages other than English, something that also sets it apart from other frontier AI systems.
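As a rough illustration of how such a multimodal model might be queried, the sketch below uses Mistral's Python client to send a combined text-and-image request. It is a minimal sketch under stated assumptions: the model identifier and the exact message format for images are assumptions and may differ from the final API.

```python
import os
from mistralai import Mistral

# Minimal sketch using the official mistralai Python client.
# "mistral-large-3" is a hypothetical identifier; the real name may differ.
client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

response = client.chat.complete(
    model="mistral-large-3",  # hypothetical identifier
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe what this chart shows."},
                # Image passed by URL; format assumed from the client's
                # existing multimodal conventions.
                {"type": "image_url", "image_url": "https://example.com/chart.png"},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```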
The Ministral 3 range, also prepared to work with text and images, comprises nine compact models grouped into three sizes of 14 billion, 8 billion and 3 billion parameters. Each size has three variants customized for different use cases and purposes.
Of these, the base models are intended for deep customization, the fine-tuned models are instruction-tuned for general chat and task following, and the reasoning-optimized models target complex logic problems that require step-by-step work to deliver results.
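To illustrate the role of the base variants, the sketch below attaches LoRA adapters to a hypothetical Ministral 3 base checkpoint using Hugging Face's peft library; the checkpoint name is an assumption, not a confirmed identifier.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Hypothetical checkpoint name; the actual Hugging Face ID for the 3B base
# Ministral 3 model may differ.
base_id = "mistralai/Ministral-3B-base"

model = AutoModelForCausalLM.from_pretrained(base_id)

# LoRA keeps the base weights frozen and trains small adapter matrices,
# the usual route for customizing a base model on domain-specific data.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a small fraction of weights are trained
```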
The smaller Ministral 3 models can run on devices with as little as 4 GB of video memory, which means cutting-edge AI features can be used on conventional laptops, smartphones and other systems without relying on cloud infrastructure, and even without an Internet connection.
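The sketch below shows what running one of the smaller models locally could look like, assuming a Hugging Face checkpoint and 4-bit quantization via bitsandbytes; the model identifier is hypothetical, and the point is the arithmetic: roughly 3 billion parameters at about half a byte each is around 1.5 GB of weights, which is what makes a 4 GB GPU plausible.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Hypothetical identifier; the published name of the 3B instruct variant may differ.
model_id = "mistralai/Ministral-3B-instruct"

# 4-bit weights: ~3e9 parameters * 0.5 bytes ≈ 1.5 GB, leaving headroom on a 4 GB GPU.
quant_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=quant_config, device_map="auto"
)

messages = [{"role": "user", "content": "Summarize the Apache 2.0 license in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```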
The Mistral 3 family is now available through Mistral AI Studio, Amazon Bedrock, Azure AI Foundry, Hugging Face (Large 3 and Ministral models), Modal, IBM WatsonX, OpenRouter, Fireworks, Unsloth AI and Together AI. Availability through NVIDIA NIM and AWS SageMaker is coming soon.
