The success of contextual word representations and advances in neural information retrieval have made dense vector-based retrieval a standard approach for passage and document ranking. While effective and efficient, dual encoders are brittle to variations in query distributions and to noisy queries. Data augmentation can make models more robust, but it adds overhead to training-set generation and requires retraining and index regeneration. We present Contrastive Alignment POst Training (CAPOT), a highly efficient finetuning method that improves model robustness without requiring index regeneration or any optimization or alteration of the training set. CAPOT enables robust retrieval by freezing the document encoder while the query encoder learns to align noisy queries with their unaltered root. We evaluate CAPOT on noisy variants of MSMARCO, Natural Questions, and TriviaQA passage retrieval, finding that CAPOT has a similar impact as data augmentation with none of its overhead.
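The core idea can be illustrated with a short sketch: the document encoder (and therefore the existing index) stays frozen, while the query encoder is post-trained with a contrastive objective that pulls each noisy query toward the representation of its unaltered original. The PyTorch snippet below is a minimal, hypothetical illustration of one such post-training step; the toy encoder class, loss formulation, and hyperparameters are placeholders for exposition and are not taken from the paper.

```python
import torch
import torch.nn.functional as F
from torch import nn

# Toy stand-in for a pretrained dual-encoder tower (e.g. a BERT-based
# query/document encoder in a real system).
class TinyEncoder(nn.Module):
    def __init__(self, vocab_size=30522, dim=128):
        super().__init__()
        self.embed = nn.EmbeddingBag(vocab_size, dim)
        self.proj = nn.Linear(dim, dim)

    def forward(self, token_ids):
        # L2-normalized embeddings so dot products act as cosine similarity.
        return F.normalize(self.proj(self.embed(token_ids)), dim=-1)

query_encoder = TinyEncoder()        # trainable during post-training
document_encoder = TinyEncoder()     # frozen: the document index is never rebuilt
for p in document_encoder.parameters():
    p.requires_grad = False

optimizer = torch.optim.AdamW(query_encoder.parameters(), lr=1e-5)
temperature = 0.05  # illustrative value

def alignment_loss(noisy_q, clean_q, temperature):
    # In-batch contrastive loss: each noisy query should be closest to the
    # embedding of its own clean (unaltered) query; other queries in the
    # batch act as negatives.
    logits = noisy_q @ clean_q.t() / temperature
    targets = torch.arange(noisy_q.size(0))
    return F.cross_entropy(logits, targets)

# One illustrative step on a batch of (clean, noisy) query pairs.
clean_ids = torch.randint(0, 30522, (8, 16))  # placeholder token ids
noisy_ids = torch.randint(0, 30522, (8, 16))  # e.g. typo-injected variants

with torch.no_grad():
    clean_q = query_encoder(clean_ids)        # anchor representations

noisy_q = query_encoder(noisy_ids)
loss = alignment_loss(noisy_q, clean_q, temperature)
loss.backward()
optimizer.step()
optimizer.zero_grad()
```

Because only the query encoder is updated and the document encoder is untouched, the previously built document index remains valid, which is what lets this style of post-training avoid index regeneration.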