Is this the Allen Institute for AI’s breakout moment?
The U.S. National Science Foundation and chip powerhouse Nvidia will provide a total of $152 million in funding and infrastructure to the Seattle-based non-profit research institute to lead a new national project to build open AI models for scientific research.
The five-year initiative positions Ai2, as it’s known, as a key player in building the technological backbone of U.S. artificial intelligence, while expanding its sources of support beyond its historic reliance on grants and philanthropy from the estate of late Microsoft co-founder Paul Allen.
It marks NSF’s first major investment in AI software infrastructure for science.
The initiative “will supercharge the work we do at Ai2 and increase America’s ability to deliver breakthrough AI developments,” Ai2 CEO Ali Farhadi said in a statement.
The project, Open Multimodal AI Infrastructure to Accelerate Science (OMAI), will develop a family of large, fully open, multimodal AI models trained on scientific literature and data. The goal is to accelerate breakthroughs in fields such as materials science, biology, and energy.

AI is becoming “not just a tool for research, but a catalyst for rethinking how discovery happens across every discipline,” said the OMAI project leader, Noah Smith, Ai2 senior director of natural language processing research and a computer science professor at the University of Washington’s Paul G. Allen School of Computer Science & Engineering.
Consisting of $75 million from the NSF and $77 million from Nvidia, the initiative is part of the White House AI Action Plan to strengthen U.S. leadership in AI-driven research.
“The question isn’t if AI will transform science — it’s who will lead that transformation, and this investment helps ensure that the answer is America,” said Tess deBlanc-Knowles, special assistant for artificial intelligence at NSF, in a briefing with reporters this week.
The project gives Ai2 a rare level of access to key AI hardware: Nvidia will supply its HGX B300 systems, built on the company’s new Blackwell Ultra architecture, along with AI Enterprise software to accelerate training and inference for the new models.
Nvidia CEO Jensen Huang said in a statement about the project that the goal is “to generate limitless intelligence, making it America’s most powerful and renewable resource.”
The hardware is designed to handle massive datasets with high efficiency, speeding up the pace of scientific discovery, said Jack Wells, Nvidia’s director of higher education and research, during the briefing with reporters.
Founded in 2014 by Paul Allen, Ai2 has operated largely behind the scenes of the AI boom in recent years, focusing on research and open-source tools while for-profit giants such as OpenAI, Anthropic, Google DeepMind, and Meta have dominated headlines.
Ai2’s work — from natural language models to scientific search engines — has earned respect in research circles, but the new federal and corporate backing stands to elevate its profile.
In leading the OMAI project, Ai2 will build on its experience creating OLMo and Molmo, two families of high-performance open AI models, along with open datasets such as Dolma.
Unlike many leading AI systems that are fully proprietary or only partially open, Ai2’s models are released with the full weights, training data, code, and evaluation tools needed for researchers to inspect, adapt, and retrain them. The idea is to give scientists the transparency they need to reproduce results and to trust the technology used in high-stakes research.
Also involved will be research teams from the University of Washington (led by Hanna Hajishirzi, also with Ai2), the University of Hawai’i at Hilo (Travis Mandel), the University of New Hampshire (Samuel Carton), and the University of New Mexico (Sarah Dreier).
Ai2 said the funding will primarily be used for computing power to train larger, more advanced models on the open foundations established with OLMo and Molmo.
The OMAI models will be designed for scientific workflows, helping researchers process and visualize data, generate code for analysis, and identify patterns they might otherwise miss. The systems could also surface relevant findings from outside a researcher’s specialty, linking new insights to past discoveries to speed breakthroughs, according to Ai2.
The project will roll out in stages, with datasets, code, and other resources released along the way. Ai2 expects the first major model to be available about 18 months into the five-year effort.