In traditional information retrieval, user queries and documents are represented as lists of keywords, and retrieval is performed by keyword matching. However, simple keyword matching faces several challenges. First, it cannot capture user intent; in particular, it cannot distinguish positive from negative sentiment and may mistakenly return results with the opposite meaning. Second, it cannot match synonymous expressions, which reduces the diversity of results [18]. Third, it cannot handle spelling errors and returns irrelevant results for misspelled queries. Query alteration is therefore employed to address these challenges. Unfortunately, it is difficult to cover all kinds of query alterations, especially newly emerging ones.
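To make the synonym problem concrete, the following minimal sketch implements keyword matching over a tiny inverted index; the documents and query terms are invented for illustration. A document that expresses the same meaning with different words is invisible to this kind of retrieval:

```python
# A minimal sketch of keyword-matching retrieval over an inverted index.
# All documents and terms below are hypothetical examples.
docs = {
    0: "cheap laptop deals",
    1: "inexpensive notebook offers",  # synonymous, but shares no keywords
}

# Build an inverted index: term -> set of document ids containing it.
index = {}
for doc_id, text in docs.items():
    for term in text.split():
        index.setdefault(term, set()).add(doc_id)

def keyword_search(query):
    """Return documents containing every query term (boolean AND)."""
    hits = [index.get(term, set()) for term in query.split()]
    return set.intersection(*hits) if hits else set()

print(keyword_search("cheap laptop"))  # {0} -- misses the synonymous doc 1
```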
With the great success of deep learning in natural language processing, both queries and documents can be represented more meaningfully as semantic embedding vectors. Since embedding-based retrieval addresses all three challenges above, it has been widely adopted in modern information systems to deliver new state-of-the-art retrieval quality and performance. Numerous prior studies have concentrated on deep embedding models, from DSSM [21], CDSSM [46], LSTM-RNN [38], and ARC-I [20] to transformer-based embedding models [10, 16, 39, 40, 45, 53, 54]. Compared with traditional keyword matching, these models have shown impressive gains on small datasets when paired with brute-force nearest neighbor search over the embeddings.
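The brute-force retrieval step itself is simple: score the query embedding against every document embedding and keep the top k. A minimal NumPy sketch follows; it assumes some trained encoder has already produced the vectors (the random vectors below are stand-ins, not a real encoder's output):

```python
# A minimal sketch of brute-force (exact) embedding retrieval with NumPy.
import numpy as np

def brute_force_search(query_vec, doc_matrix, k=10):
    """Return indices of the k most similar documents by cosine similarity."""
    # Normalize so the dot product equals cosine similarity.
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_matrix / np.linalg.norm(doc_matrix, axis=1, keepdims=True)
    scores = d @ q                   # one similarity score per document
    return np.argsort(-scores)[:k]   # top-k indices, best first

# Stand-in embeddings; a real system would use a trained encoder.
docs = np.random.randn(100_000, 768).astype(np.float32)
query = np.random.randn(768).astype(np.float32)
print(brute_force_search(query, docs, k=5))
```

The cost of this exact scan grows linearly with corpus size, which is exactly what makes it untenable at web scale and motivates the ANN methods discussed next.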
Due to the extremely high computational cost and query latency of brute-force vector search, much research has focused on large-scale approximate nearest neighbor (ANN) search algorithms and systems design [5–7, 11, 19, 24–26, 41, 48]. These approaches can be divided into partition-based and graph-based solutions. Partition-based solutions, such as SPANN [11], divide the whole vector space into a large number of clusters offline and, at query time, perform fine-grained search on only the few clusters closest to the query. Graph-based solutions, such as DiskANN [48], construct a neighbor graph over the whole dataset and perform a best-first traversal from fixed starting points when a query arrives. Both approaches work well on uniformly distributed datasets.
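The partition-based idea can be sketched in a few lines: cluster the corpus offline, then probe only the nearest clusters online. The sketch below uses k-means in the generic IVF style; the cluster count and `nprobe` values are illustrative, not the settings of SPANN or any cited system:

```python
# A minimal sketch of partition-based ANN: cluster offline, probe few online.
import numpy as np
from sklearn.cluster import KMeans

docs = np.random.randn(50_000, 128).astype(np.float32)  # stand-in corpus

# Offline: partition the vector space and bucket documents by nearest centroid.
n_clusters = 256
kmeans = KMeans(n_clusters=n_clusters, n_init=1).fit(docs)
buckets = {c: np.where(kmeans.labels_ == c)[0] for c in range(n_clusters)}

def ann_search(query, nprobe=8, k=10):
    """Search only the nprobe clusters whose centroids are closest to the query."""
    dists = np.linalg.norm(kmeans.cluster_centers_ - query, axis=1)
    nearest = np.argsort(dists)[:nprobe]
    candidates = np.concatenate([buckets[c] for c in nearest])
    # Fine-grained (exact) search within the shortlisted candidates only.
    scores = np.linalg.norm(docs[candidates] - query, axis=1)
    return candidates[np.argsort(scores)[:k]]

print(ann_search(np.random.randn(128).astype(np.float32)))
```

Graph-based methods trade this cluster structure for a best-first walk over a neighbor graph, but the speed-versus-recall trade-off is analogous: both prune the candidate set and can miss true neighbors that an exact scan would find.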
Unfortunately, when applying embedding-based retrieval in the web scenario, several new challenges emerge. First, web-scale data volumes require large models, high embedding dimensions, and a large-scale labeled training dataset to guarantee sufficient knowledge coverage. Second, performance gains of state-of-the-art embedding models verified on small datasets do not directly transfer to a web-scale dataset (see section 4.4). Third, embedding models must work in tandem with ANN systems to serve large data volumes efficiently. However, different training data distributions may affect the accuracy and system performance of an ANN algorithm, which can greatly reduce result accuracy compared with the same embedding model under brute-force search. Distill-VQ [52] verified that the CoCondenser [17] embedding model with a Faiss IVFPQ ANN index achieves different result accuracy on the MS MARCO [35] and NQ [28] datasets. Moreover, even the same training data distribution can yield different embedding vector distributions, which lead to different ranking trends between brute-force search (KNN) and approximate nearest neighbor search (ANN) (see section 4.6).
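One common way to quantify this KNN-versus-ANN gap is recall@k: the fraction of the exact top-k neighbors that the ANN shortlist recovers for the same query. A minimal sketch, with invented result lists standing in for the outputs of the two search routines above:

```python
# A minimal sketch of recall@k between exact (KNN) and approximate (ANN) results.
import numpy as np

def recall_at_k(ann_ids, exact_ids, k=10):
    """Fraction of the exact top-k neighbors that the ANN search recovered."""
    return len(set(ann_ids[:k]) & set(exact_ids[:k])) / k

# Hypothetical result lists: the ANN run recovers 8 of the exact top 10.
exact = np.array([3, 7, 1, 9, 4, 0, 5, 8, 2, 6])
approx = np.array([3, 7, 1, 9, 4, 0, 5, 8, 11, 12])
print(recall_at_k(approx, exact))  # 0.8
```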
Authors:
(1) Qi Chen, Microsoft, Beijing, China;
(2) Xiubo Geng, Microsoft, Beijing, China;
(3) Corby Rosset, Microsoft, Redmond, United States;
(4) Carolyn Buractaon, Microsoft, Redmond, United States;
(5) Jingwen Lu, Microsoft, Redmond, United States;
(6) Tao Shen, University of Technology Sydney, Sydney, Australia and the work was done at Microsoft;
(7) Kun Zhou, Microsoft, Beijing, China;
(8) Chenyan Xiong, Carnegie Mellon University, Pittsburgh, United States and the work was done at Microsoft;
(9) Yeyun Gong, Microsoft, Beijing, China;
(10) Paul Bennett, Spotify, New York, United States and the work was done at Microsoft;
(11) Nick Craswell, Microsoft, Redmond, United States;
(12) Xing Xie, Microsoft, Beijing, China;
(13) Fan Yang, Microsoft, Beijing, China;
(14) Bryan Tower, Microsoft, Redmond, United States;
(15) Nikhil Rao, Microsoft, Mountain View, United States;
(16) Anlei Dong, Microsoft, Mountain View, United States;
(17) Wenqi Jiang, ETH Zürich, Zürich, Switzerland;
(18) Zheng Liu, Microsoft, Beijing, China;
(19) Mingqin Li, Microsoft, Redmond, United States;
(20) Chuanjie Liu, Microsoft, Beijing, China;
(21) Zengzhong Li, Microsoft, Redmond, United States;
(22) Rangan Majumder, Microsoft, Redmond, United States;
(23) Jennifer Neville, Microsoft, Redmond, United States;
(24) Andy Oakley, Microsoft, Redmond, United States;
(25) Knut Magne Risvik, Microsoft, Oslo, Norway;
(26) Harsha Vardhan Simhadri, Microsoft, Bengaluru, India;
(27) Manik Varma, Microsoft, Bengaluru, India;
(28) Yujing Wang, Microsoft, Beijing, China;
(29) Linjun Yang, Microsoft, Redmond, United States;
(30) Mao Yang, Microsoft, Beijing, China;
(31) Ce Zhang, ETH Zürich, Zürich, Switzerland and the work was done at Microsoft.