In this paper, we present MS MARCO Web Search, a large-scale dataset for research on web information retrieval. The MS MARCO Web Search dataset consists of a high-quality set of web pages that mirrors the highly skewed web document distribution, a query set that reflects the real web query distribution, and a large-scale query-document label set for embedding model training and evaluation.
We use ClueWeb22 [9] as our document set since it is the largest and newest open web document dataset for our purpose. It meets the requirements of large scale, high quality, and a realistic document distribution, having been crawled and processed by a commercial web search engine. Compared to Common Crawl [2], which crawls only 35 million registered domains and covers 40+ languages, ClueWeb22 closely mimics the realistic crawl selection of a commercial search engine and covers 207 languages. It contains 10 billion high-quality web pages with rich affiliated information, such as URL, language tag, topic tag, title, and clean text. Figure 2 (d) gives an example of the data structures provided by ClueWeb22.
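To make the per-page fields concrete, the sketch below models one ClueWeb22-style record with the attributes named above (URL, language tag, topic tag, title, clean text). The class and field names are illustrative, not the dataset's actual schema.

```python
from dataclasses import dataclass

@dataclass
class WebPage:
    """Illustrative record mirroring the affiliated information
    ClueWeb22 attaches to each page; field names are assumptions."""
    url: str
    language: str    # language tag, e.g. "en"
    topic: str       # topic tag
    title: str
    clean_text: str  # body text with markup and boilerplate removed

page = WebPage(
    url="https://example.com/article",
    language="en",
    topic="news",
    title="Example article",
    clean_text="Body text with markup removed.",
)
print(page.language)  # -> en
```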
To make training cost-effective for both academia and industry, we provide a 100 million and a 10 billion document set. The 100 million document set is a random subset of the 10 billion document set. In order to evaluate model generalization ability on a small-scale dataset, two non-overlapping 100 million document sets are provided, one for training and the other for testing. The whole process is shown in the left part of Figure 1.
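The construction of two disjoint random 100 million subsets can be sketched as below. This is a minimal illustration of sampling without replacement; the authors' exact sampling procedure is not specified in the text, and the function name and toy sizes are assumptions.

```python
import random

def split_disjoint_subsets(doc_ids, size, seed=0):
    """Draw two disjoint random subsets of `size` documents each
    from `doc_ids`, e.g. a train set and a test set."""
    rng = random.Random(seed)
    # Sampling 2*size ids without replacement, then splitting,
    # guarantees the two halves share no document.
    sample = rng.sample(doc_ids, 2 * size)
    return sample[:size], sample[size:]

# Toy example: 20 "documents", two subsets of 5 each.
train, test = split_disjoint_subsets(list(range(20)), 5)
assert set(train).isdisjoint(set(test))  # disjoint by construction
```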
Authors:
(1) Qi Chen, Microsoft, Beijing, China;
(2) Xiubo Geng, Microsoft, Beijing, China;
(3) Corby Rosset, Microsoft, Redmond, United States;
(4) Carolyn Buractaon, Microsoft, Redmond, United States;
(5) Jingwen Lu, Microsoft, Redmond, United States;
(6) Tao Shen, University of Technology Sydney, Sydney, Australia and the work was done at Microsoft;
(7) Kun Zhou, Microsoft, Beijing, China;
(8) Chenyan Xiong, Carnegie Mellon University, Pittsburgh, United States and the work was done at Microsoft;
(9) Yeyun Gong, Microsoft, Beijing, China;
(10) Paul Bennett, Spotify, New York, United States and the work was done at Microsoft;
(11) Nick Craswell, Microsoft, Redmond, United States;
(12) Xing Xie, Microsoft, Beijing, China;
(13) Fan Yang, Microsoft, Beijing, China;
(14) Bryan Tower, Microsoft, Redmond, United States;
(15) Nikhil Rao, Microsoft, Mountain View, United States;
(16) Anlei Dong, Microsoft, Mountain View, United States;
(17) Wenqi Jiang, ETH Zürich, Zürich, Switzerland;
(18) Zheng Liu, Microsoft, Beijing, China;
(19) Mingqin Li, Microsoft, Redmond, United States;
(20) Chuanjie Liu, Microsoft, Beijing, China;
(21) Zengzhong Li, Microsoft, Redmond, United States;
(22) Rangan Majumder, Microsoft, Redmond, United States;
(23) Jennifer Neville, Microsoft, Redmond, United States;
(24) Andy Oakley, Microsoft, Redmond, United States;
(25) Knut Magne Risvik, Microsoft, Oslo, Norway;
(26) Harsha Vardhan Simhadri, Microsoft, Bengaluru, India;
(27) Manik Varma, Microsoft, Bengaluru, India;
(28) Yujing Wang, Microsoft, Beijing, China;
(29) Linjun Yang, Microsoft, Redmond, United States;
(30) Mao Yang, Microsoft, Beijing, China;
(31) Ce Zhang, ETH Zürich, Zürich, Switzerland and the work was done at Microsoft.