We carry out a comprehensive evaluation of 13 recent models for ranking long documents, using two popular collections (MS MARCO documents and Robust04). Our model zoo includes two specialized Transformer models (e.g., Longformer) that can process long documents without the need to split them. Along the way, we document several difficulties in training and comparing such models. Somewhat surprisingly, we find the simple FirstP baseline (truncating documents to satisfy the input-sequence constraint of a typical Transformer model) to be quite effective. We analyze the distribution of relevant passages within documents to explain this phenomenon. We further argue that, despite their widespread use, Robust04 and MS MARCO documents are not particularly useful for benchmarking long-document models.
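To make the FirstP idea concrete, below is a minimal sketch of a FirstP-style reranker: a long document is scored using only the prefix that fits the encoder's input limit. This is our illustration, not the paper's implementation; the checkpoint name, the 512-token limit, and the `firstp_score` helper are assumptions chosen for the example.

```python
# Minimal FirstP sketch (illustrative, not the authors' exact pipeline):
# score a query-document pair with a cross-encoder, truncating the document
# to the model's maximum input length so only its first passage is used.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "cross-encoder/ms-marco-MiniLM-L-6-v2"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()


def firstp_score(query: str, document: str, max_len: int = 512) -> float:
    """Relevance score computed from the document prefix only."""
    # truncation="only_second" keeps the full query and cuts the document,
    # which is exactly the FirstP truncation described in the abstract.
    inputs = tokenizer(
        query,
        document,
        truncation="only_second",
        max_length=max_len,
        return_tensors="pt",
    )
    with torch.no_grad():
        return model(**inputs).logits.squeeze().item()
```

A specialized long-document model such as Longformer would instead raise the input limit (e.g., to 4096 tokens) so that no splitting or truncation is needed; the finding above is that this extra capacity often yields little gain over the simple prefix-based baseline.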