The creator of ChatGPT has just lifted the veil on a new project that is more specialized than its general-purpose AI: GPT-Rosalind. Announced as a frontier reasoning model, it is not a general AI but a tailor-made tool, sharpened for the complexities of biology, genomics and chemistry.
Its name pays homage to the crystallographer Rosalind Franklin, whose work was essential to the discovery of the structure of DNA. The choice of this name is anything but trivial: it is a powerful symbolic rehabilitation, a nod to the history of science and its too often forgotten figures.
How is GPT-Rosalind different from other AIs like ChatGPT?
The fundamental distinction lies in its specialization. Where ChatGPT is a linguistic Swiss army knife, GPT-Rosalind is a laboratory scalpel. It has been trained and optimized for scientific workflows.
It doesn’t just understand language; it reasons about complex entities such as molecules, proteins, genes and metabolic pathways.
This domain expertise allows it to connect to tools and databases specific to the world of life sciences. Concretely, a researcher can ask it to synthesize scientific articles on a therapeutic target, to design a molecular cloning protocol, or to analyze raw data from experiments.
It acts like a laboratory co-pilot, capable of untangling the complex web of biological data. This is a major change compared to general models, which often stumble over the jargon and multi-step reasoning required in fundamental research.
How does GPT-Rosalind actually perform against human experts?
OpenAI brought out the heavy artillery to prove the capabilities of its model. On benchmarks (standardized tests) like BixBench and LABBench2, GPT-Rosalind shows leading performance, even outperforming newer models like GPT-5.4 on certain tasks.
But the most striking result comes from a collaboration with Dyno Therapeutics, a company specializing in gene therapies. A blinded evaluation was conducted on an “RNA sequence to function” prediction task, using unpublished data to avoid any bias.
The results are spectacular. GPT-Rosalind’s submissions ranked above the 95th percentile of human experts, i.e. in the top 5%.
This is proof that AI is no longer just an assistance tool, but that it can achieve a superhuman level of performance on cutting-edge scientific problems. This asset is crucial in the field of drug discovery, where every efficiency gain translates into potentially saved lives.
How can scientists actually use this tool?
The GPT-Rosalind ecosystem is not limited to the model itself. OpenAI simultaneously launched a “Life Sciences research plugin” for its Codex interface.
This plugin is a real toolbox that connects the AI to more than 50 public scientific databases and tools, such as PubMed for the literature, UniProt for proteins, or AlphaFold for predicting their 3D structures.
This orchestration layer transforms complex requests into concrete actions. For example, a scientist could ask the interface to “find all studies linking gene X to disease Y, extract 3D structures of the proteins involved, and suggest enzyme sequences for a cloning experiment.”
The AI would then draw from the right sources, assemble the information and generate a structured response. It is a way of streamlining and accelerating a search task which would normally take a human hours, if not days, to complete.
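To make the orchestration idea concrete, here is a minimal sketch of the kind of multi-step query chaining described above. The endpoints used are the real public NCBI E-utilities, UniProt REST, and AlphaFold Database APIs; the function names, the chaining itself, and the `BRCA1` example are illustrative assumptions, not OpenAI's actual plugin.

```python
from urllib.parse import urlencode

# Hypothetical sketch of plugin-style orchestration: the endpoints are the
# real public NCBI E-utilities, UniProt REST, and AlphaFold DB APIs, but the
# chaining and function names are illustrative assumptions.

def pubmed_search_url(gene: str, disease: str, max_results: int = 20) -> str:
    """Step 1: build an E-utilities query for studies linking a gene to a disease."""
    params = urlencode({
        "db": "pubmed",
        "term": f"{gene}[Gene] AND {disease}[Title/Abstract]",
        "retmax": max_results,
        "retmode": "json",
    })
    return f"https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?{params}"

def uniprot_search_url(gene: str, organism_id: int = 9606) -> str:
    """Step 2: look up the proteins encoded by the gene (human by default)."""
    params = urlencode({
        "query": f"gene:{gene} AND organism_id:{organism_id}",
        "format": "json",
    })
    return f"https://rest.uniprot.org/uniprotkb/search?{params}"

def alphafold_structure_url(uniprot_accession: str) -> str:
    """Step 3: fetch the predicted 3D structure for a UniProt accession."""
    return f"https://alphafold.ebi.ac.uk/api/prediction/{uniprot_accession}"

if __name__ == "__main__":
    # Example: studies linking BRCA1 to breast cancer, then its structure.
    print(pubmed_search_url("BRCA1", "breast cancer"))
    print(uniprot_search_url("BRCA1"))
    print(alphafold_structure_url("P38398"))  # P38398 is human BRCA1
```

An orchestration layer would issue these requests in sequence, parse each JSON response, and feed the results into the next step — the step a human would otherwise perform by hand across several browser tabs.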
Why is access to GPT-Rosalind so restricted?
With GPT-Rosalind’s power comes overwhelming responsibility. The same capabilities that accelerate drug creation could, in theory, be harnessed to engineer pathogens or toxins.
Aware of this risk of “dual use,” OpenAI has opted for an ultra-cautious approach. Access to the model is strictly limited to a trusted access program for qualified companies and institutions in the United States.
Applicants must demonstrate that their research pursues a clear public benefit, that they have strong security and governance controls, and that they are committed to preventing malicious use.
This approach, although frustrating for many researchers, highlights a crucial debate: how to democratize access to such powerful technologies without opening Pandora’s box? OpenAI’s response, for now, is very controlled access.
