In this paper, we present DuReader_retrieval, a large-scale Chinese dataset for passage retrieval. DuReader_retrieval contains more than 90K queries and over 8M unique passages from a commercial search engine. To alleviate the shortcomings of other datasets and ensure the quality of our benchmark, we (1) reduce the false negatives in the development and test sets by manually annotating results pooled from multiple retrievers, and (2) remove training queries that are semantically similar to the development and test queries. Additionally, we provide two out-of-domain test sets for cross-domain evaluation, as well as a set of human-translated queries for cross-lingual retrieval evaluation. The experiments demonstrate that DuReader_retrieval is challenging and that a number of problems remain unsolved, such as salient phrase mismatch and syntactic mismatch between queries and passages. These experiments also show that dense retrievers do not generalize well across domains, and that cross-lingual retrieval remains inherently challenging. DuReader_retrieval is publicly available at https://github.com/baidu/DuReader/tree/master/DuReader-Retrieval.