OpenAI and Microsoft Corp. today introduced two artificial intelligence models optimized to generate speech.
OpenAI’s new algorithm, gpt-realtime, is described as its most capable voice model to date. The AI produces more natural-sounding speech than the ChatGPT developer’s earlier entries in the category. It’s also capable of changing its tone and language mid-sentence.
According to OpenAI, gpt-realtime is particularly adept at following instructions. That allows developers who use the model in applications to customize it for specific tasks. For example, a software team building a technical support assistant could instruct gpt-realtime to cite knowledge base articles in certain prompt responses.
Developers applying the model to technical support use cases also have access to a new image upload tool. Using the feature, a customer service chatbot could enable users to upload screenshots of a malfunctioning application they wish to troubleshoot. OpenAI also sees customers harnessing the capability for a range of other tasks.
Developers can access gpt-realtime through the OpenAI Realtime API. It’s an application programming interface that allows customers to interact with the ChatGPT developer’s voice and multimodal models. As part of today’s product update, OpenAI moved the API into general availability with a number of new features.
“You can now save and reuse prompts — consisting of developer messages, tools, variables, and example user/assistant messages — across Realtime API sessions,” OpenAI researchers detailed in a blog post.
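The Realtime API is event-driven: a client sends JSON events over a WebSocket to configure a session and steer the model. As a rough illustration of how per-session instructions are set, the sketch below builds a `session.update` event; the `build_session_update` helper is hypothetical, and the exact session fields should be checked against OpenAI's current API reference rather than taken from here:

```python
import json

def build_session_update(instructions: str, voice: str = "alloy") -> str:
    """Build a Realtime API session.update event (assumed shape) that
    sets instruction text and a voice for the current session."""
    event = {
        "type": "session.update",
        "session": {
            "instructions": instructions,
            "voice": voice,
        },
    }
    return json.dumps(event)

# Example: a support assistant told to cite knowledge base articles,
# mirroring the use case described above.
msg = build_session_update(
    "You are a technical support assistant. Cite relevant knowledge "
    "base articles when answering troubleshooting questions."
)
```

A client would send this JSON string over the WebSocket connection after the session opens; the new prompt-reuse feature lets such configuration be saved and applied across sessions instead of being resent each time.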
The voice AI model that Microsoft detailed in conjunction with the launch of gpt-realtime is called MAI-Voice-1. It’s initially available in the company’s Microsoft Copilot assistant. According to the company, the model powers features that enable the assistant to summarize updates such as weather forecasts and generate podcasts from text.
Microsoft says MAI-Voice-1 is one of the industry’s most hardware-efficient voice models. It can generate one minute of audio in under a second using a single graphics processing unit. Microsoft didn’t provide additional information, such as which GPU was used to measure the model’s single-chip performance.
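Taken at face value, that claim implies a real-time factor of at least 60, meaning the model synthesizes audio more than 60 times faster than it plays back:

```python
# Real-time factor implied by Microsoft's stated numbers:
# one minute of audio generated in under one second on a single GPU.
audio_seconds = 60.0        # duration of the generated audio
generation_seconds = 1.0    # stated upper bound on generation time
real_time_factor = audio_seconds / generation_seconds
print(real_time_factor)     # at least 60x faster than playback
```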
The company shared more details about MAI-1-preview, a second new AI model that debuted today. Microsoft trained the model using 15,000 of Nvidia Corp.’s H100 accelerators. The H100 was the chipmaker’s flagship data center graphics card when it launched in 2022.
Like Microsoft’s new voice model, MAI-1-preview is optimized for efficiency. Neural networks usually activate all their parameters, or configuration settings, when processing a prompt. MAI-1-preview has a mixture-of-experts architecture that allows it to activate only a subset of its parameters, which significantly reduces hardware usage.
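The mechanism works roughly as follows: a small router network scores every expert for each input, and only the top-k scoring experts actually run, so compute scales with k rather than with the total expert count. The toy `TinyMoE` class below is an illustrative sketch of that routing pattern, not Microsoft's actual architecture:

```python
import math
import random

random.seed(0)

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

class TinyMoE:
    """Toy mixture-of-experts layer: a router scores all experts,
    but only the top-k experts run on any given input."""
    def __init__(self, n_experts=8, top_k=2, dim=4):
        self.top_k = top_k
        # Each expert is a random linear map (dim -> dim), stored as rows.
        self.experts = [
            [[random.gauss(0, 1) for _ in range(dim)] for _ in range(dim)]
            for _ in range(n_experts)
        ]
        # Router: one scoring weight vector per expert.
        self.router = [[random.gauss(0, 1) for _ in range(dim)]
                       for _ in range(n_experts)]

    def forward(self, x):
        scores = [sum(w * xi for w, xi in zip(row, x)) for row in self.router]
        # Keep only the k highest-scoring experts; the rest stay inactive.
        top = sorted(range(len(scores)), key=lambda i: scores[i])[-self.top_k:]
        gates = softmax([scores[i] for i in top])
        out = [0.0] * len(x)
        for g, i in zip(gates, top):
            y = [sum(w * xi for w, xi in zip(row, x))
                 for row in self.experts[i]]
            out = [o + g * yi for o, yi in zip(out, y)]
        return out, top

moe = TinyMoE()
output, active_experts = moe.forward([1.0, 0.5, -0.3, 2.0])
```

Here only 2 of the 8 experts compute anything per input, which is the source of the hardware savings the article describes.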
On launch, MAI-1-preview is available to a limited number of testers through an API. It will roll out to Microsoft Copilot in the coming weeks.
The company hinted that it plans to introduce an improved version of MAI-1-preview in the coming months. The upcoming model will be trained using a cluster of GB200 appliances. Each system combines 72 Blackwell B200 chips, Nvidia’s latest and most advanced data center GPUs, with 36 central processing units.
“Not only will we pursue further advances here, but we believe that orchestrating a range of specialized models serving different user intents and use cases will unlock immense value,” researchers from Microsoft’s AI group wrote in a blog post.