OpenAI has released BrowseComp, a new benchmark designed to test AI agents’ ability to locate difficult-to-find information on the web. The benchmark contains 1,266 challenging problems that require agents to persistently navigate through multiple websites to retrieve entangled information.
Unlike existing benchmarks such as SimpleQA that focus on basic fact retrieval and are already “saturated by models with access to fast browsing tools, such as GPT-4o with browsing,” BrowseComp challenges agents to sift through tens or even hundreds of websites to find answers. The benchmark questions have short, unambiguous answers that can be easily verified against reference solutions.
OpenAI positions BrowseComp as “analogous to how programming competitions are an incomplete but useful benchmark for coding agents.” While it doesn’t address all aspects of real-world user queries, it measures the “important core capability of exercising persistence and creativity in finding information” that will be essential for next-generation AI assistants.
While humans struggle with web navigation due to “limited memory and world knowledge,” vulnerability to “distraction and fatigue,” and inability to multitask, machine intelligence theoretically offers advantages through superior recall and tireless operation. However, current AI systems fall short of this potential. Despite recent advances in large language models, AI agents still “underperform when tasked with locating nuanced, context-dependent facts across multiple sources.” Traditional benchmarks primarily measure recall of easily accessible information rather than the complex browsing capabilities needed for practical applications such as research assistance, policy summarization, and fact-checking, which demand persistence and adaptive search strategies.
The BrowseComp dataset was created entirely by human trainers who developed fact-seeking questions with “single, indisputable, short answers that would not change over time.” To ensure questions met the benchmark’s standard of difficulty, trainers verified that leading models, including GPT-4o (with and without browsing), OpenAI o1, and an early version of OpenAI’s deep research model, could not solve them. Additionally, trainers confirmed answers weren’t discoverable within the first page of results for five different Google searches, and aimed to create problems that would take most people more than ten minutes to solve. The benchmark uses an “inverted question” approach in which trainers started with facts and then constructed questions that made those facts “hard to find but easy to verify,” typically by combining multiple characteristics, each with a large search space.
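This “hard to find but easy to verify” property is what keeps grading simple: once a candidate answer has been produced, checking it against the short reference answer is straightforward. The Python sketch below illustrates the idea with a naive normalized string comparison; the function names and example values are illustrative assumptions, not the benchmark’s actual grading code (which may, for instance, rely on a model-based grader).

```python
import re

def normalize(text: str) -> str:
    """Lowercase, strip punctuation, and collapse whitespace."""
    text = re.sub(r"[^\w\s]", "", text.lower().strip())
    return re.sub(r"\s+", " ", text)

def is_correct(candidate: str, reference: str) -> bool:
    """Return True if the candidate answer matches the short reference answer."""
    return normalize(candidate) == normalize(reference)

# A fact may be hard to find on the web, but trivial to check once proposed.
print(is_correct("  The answer is: Reykjavik ", "reykjavik"))  # False: extra words
print(is_correct("Reykjavik.", "Reykjavik"))                   # True
```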
OpenAI evaluated several of its models on the BrowseComp benchmark, including non-browsing models such as GPT-4o, GPT-4.5, and OpenAI o1, as well as web-enabled systems such as GPT-4o with browsing and its Deep Research model. The results reveal that Deep Research “significantly outperforms all other models, solving around half of the problems.” This agentic model demonstrates capabilities in “autonomously searching the web, evaluating and synthesizing information from multiple sources, and adapting its search strategy,” skills that are critical for tackling BrowseComp’s intentionally difficult questions.
Source: Accuracy and calibration of OpenAI models on BrowseComp
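OpenAI’s chart reports both accuracy and calibration. As a rough illustration of what those two measurements capture, the sketch below assumes each graded response carries the model’s stated confidence and computes overall accuracy together with a simple expected calibration error; the data structures and bucketing scheme are assumptions made for this example, not OpenAI’s evaluation code.

```python
from dataclasses import dataclass

@dataclass
class GradedResponse:
    correct: bool      # did the answer match the reference?
    confidence: float  # model's stated confidence, in [0.0, 1.0]

def accuracy(results: list[GradedResponse]) -> float:
    """Fraction of questions answered correctly."""
    return sum(r.correct for r in results) / len(results)

def expected_calibration_error(results: list[GradedResponse], bins: int = 10) -> float:
    """Bucket responses by stated confidence and compare each bucket's
    average confidence with its empirical accuracy (lower is better)."""
    ece = 0.0
    for b in range(bins):
        lo, hi = b / bins, (b + 1) / bins
        bucket = [r for r in results
                  if lo <= r.confidence < hi or (b == bins - 1 and r.confidence == 1.0)]
        if not bucket:
            continue
        avg_conf = sum(r.confidence for r in bucket) / len(bucket)
        bucket_acc = sum(r.correct for r in bucket) / len(bucket)
        ece += (len(bucket) / len(results)) * abs(avg_conf - bucket_acc)
    return ece

# Toy data: an overconfident model scores 50% accuracy despite high stated confidence.
results = [GradedResponse(True, 0.9), GradedResponse(False, 0.8),
           GradedResponse(True, 0.7), GradedResponse(False, 0.9)]
print(f"accuracy={accuracy(results):.2f}, ece={expected_calibration_error(results):.2f}")
```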
The release of BrowseComp has sparked discussion about the future of web search and AI-assisted research.
Michael Buckbee, Founder of Knowatoa, expressed both optimism and concern about these developments. “While I’m positive about the impact of AI on search, if there’s one innovation that threatens the search market as we know it, it’s ‘Deep Research’ agents,” Buckbee said. “We’re hurtling towards a future where people don’t see search results at all but just ‘reports’ of search results. The new AI modes, deep research tools, and interfaces all clearly depict what this looks like.”
Nishant Sinha, AI Advisor and Builder, highlighted the significance of BrowseComp’s difficulty level: “Browser use agents have grown in their accuracy for locating UI elements on a web page and even executing a series of natural language instructions. But this benchmark stress tests them! Not just find a piece of information easily accessible but something that is ‘hidden’ behind several doors.”
Developers and researchers interested in exploring BrowseComp can access the benchmark through its GitHub repository. For a deeper understanding of the methodology and findings, read the full research paper. Readers may also want to check out our recent coverage of OpenAI’s Deep Research model.