Microsoft Corp. has developed a series of large language models that can rival algorithms from OpenAI and Anthropic PBC, multiple publications reported today.
Sources told Bloomberg that the LLM series is known as MAI. That’s presumably an acronym for “Microsoft artificial intelligence.” It might also be a reference to Maia 100, an internally developed AI chip the company debuted last year. It’s possible Microsoft is using the processor to power the new MAI models.
The company recently tested the LLM series to gauge its performance. As part of the evaluation, Microsoft engineers checked whether MAI could power the company’s Copilot family of AI assistants. Data from the tests reportedly indicates that the LLM series is competitive with models from OpenAI and Anthropic.
That Microsoft evaluated whether MAI could be integrated into Copilot suggests that the LLM series is geared toward general-purpose processing rather than reasoning. Many of the tasks supported by Copilot can be performed with a general-purpose model. According to Bloomberg, Microsoft is separately developing a second LLM series optimized for reasoning tasks.
The report didn’t specify details such as the number of models Microsoft is training or their parameter counts. It’s also unclear whether they might provide multimodal features.
MAI could help the company reduce its reliance on OpenAI, which provides the LLMs that power Copilot. Microsoft has invested more than $13 billion in the ChatGPT developer and was until recently its exclusive cloud provider. In January, the companies revised the terms of their partnership to let OpenAI move workloads to rival platforms.
If Microsoft moves Copilot beyond OpenAI’s models, it might add support for not just one but several competing LLMs. The company has reportedly tested whether algorithms from Anthropic, Meta Platforms Inc., DeepSeek and xAI Corp. could be used to power Copilot.
“As we’ve said previously, we are using a mix of models, which includes continuing our deep partnership with OpenAI, along with models from Microsoft AI and open source models,” a Microsoft spokesperson told Bloomberg.
MAI wouldn’t be the company’s first entry into the LLM market. It has also developed Phi, a series of open-source language models optimized for power efficiency. The model series is currently in its fourth iteration.
The two newest Phi algorithms, Phi-4-mini and Phi-4-multimodal, debuted in February. The former features 3.8 billion parameters and lends itself to reasoning tasks such as solving math problems. Phi-4-multimodal, in turn, is an upgraded version of Phi-4-mini that can process multimodal input. Microsoft says the latter model performs some tasks nearly as well as GPT-4, which has significantly more parameters.
To build its Phi-4 models, Microsoft developed new LLM training methods that rely on synthetic data. Those methods might prove useful for MAI and the series of reasoning-optimized LLMs the company is reportedly developing.
Photo: Pixabay