We study the utility of the lexical translation model (IBM Model 1) for English text retrieval, in particular, its neural variants that are trained end-to-end. We use the neural Model 1 as an aggregator layer applied to context-free or contextualized query/document embeddings. This new approach to designing a neural ranking system has benefits for effectiveness, efficiency, and interpretability. Specifically, we show that adding an interpretable neural Model 1 layer on top of BERT-based contextualized embeddings (1) does not decrease accuracy and/or efficiency; and (2) may overcome the limitation on the maximum sequence length of existing BERT models. The context-free neural Model 1 is less effective than a BERT-based ranking model, but it can run efficiently on a CPU (without expensive index-time precomputation or query-time operations on large tensors). Using Model 1 we produced the best neural and non-neural runs on the MS MARCO document ranking leaderboard in late 2020.
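To make the aggregator idea concrete, below is a minimal, illustrative PyTorch sketch of a Model 1-style scoring layer over query/document token embeddings. It is not the paper's exact formulation: the class name, the learned projection, the restriction of the translation distribution to the query's own tokens, and the uniform smoothing term are all assumptions made for brevity.

```python
# Hedged sketch: an IBM Model 1-style aggregator over token embeddings.
# Assumed/simplified for illustration: the real translation table normalizes
# over the whole vocabulary and is typically smoothed with collection statistics.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Model1Aggregator(nn.Module):
    """Scores a (query, document) pair from token embeddings using a
    Model 1-style log-likelihood: each query token is generated by a
    uniform mixture of per-document-token translation distributions."""

    def __init__(self, emb_dim: int, smoothing: float = 0.1):
        super().__init__()
        self.proj = nn.Linear(emb_dim, emb_dim)  # learned map producing translation logits
        self.smoothing = smoothing               # mixes in a uniform background probability

    def forward(self, q_emb: torch.Tensor, d_emb: torch.Tensor) -> torch.Tensor:
        # q_emb: (n_q, dim) query token embeddings (context-free or BERT-based)
        # d_emb: (n_d, dim) document token embeddings
        logits = self.proj(q_emb) @ d_emb.T          # (n_q, n_d) translation logits
        # For each document token, a distribution over the query's tokens
        # approximates the translation table T(q | d_j) (simplification).
        trans = F.softmax(logits, dim=0)             # columns sum to 1
        # Model 1 assumes uniform alignment: average over document tokens.
        p_q = trans.mean(dim=-1)                     # (n_q,) probability of each query token
        n_q = q_emb.shape[0]
        p_q = (1 - self.smoothing) * p_q + self.smoothing / n_q
        return p_q.log().sum()                       # query log-likelihood given the document


# Usage with random embeddings standing in for word-vector or BERT outputs:
layer = Model1Aggregator(emb_dim=768)
score = layer(torch.randn(5, 768), torch.randn(200, 768))
```

Because the score is a sum of per-query-token terms, each built from an explicit translation distribution over document tokens, the layer remains interpretable, and documents longer than a single BERT window can be handled by concatenating token embeddings from multiple windows before aggregation.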