Dual-encoder-based neural retrieval models achieve appreciable performance and complement traditional lexical retrievers well due to their semantic matching capabilities, which makes them a common choice for hybrid IR systems. However, these models exhibit a performance bottleneck in the online query encoding step, as the corresponding query encoders are usually large and complex Transformer models. In this paper we investigate heterogeneous dual-encoder models, where the two encoders are separate models that do not share parameters or initializations. We empirically show that, due to a distribution mismatch between the two encoders, heterogeneous dual-encoders are susceptible to representation collapse when fine-tuned with a standard contrastive loss, causing them to output constant, trivial representations. We propose DAFT, a simple two-stage fine-tuning approach that aligns the two encoders in order to prevent them from collapsing. We further demonstrate how DAFT can be used to train efficient heterogeneous dual-encoder models using lightweight query encoders.
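To make the setting concrete: a heterogeneous dual-encoder pairs a lightweight query encoder with a separate, larger document encoder and is typically fine-tuned with a contrastive loss over in-batch negatives. The sketch below illustrates this setup together with one plausible way to realize an alignment stage before contrastive fine-tuning. It is not the paper's implementation; the model names (bert-base-uncased, prajjwal1/bert-tiny), the projection layer, and the MSE-based alignment objective are assumptions made purely for illustration.

    # Minimal, illustrative sketch only -- not the paper's reference implementation.
    # (1) A hypothetical alignment stage pulls the lightweight query encoder's outputs
    #     toward the frozen document encoder's outputs on the same texts.
    # (2) Standard contrastive fine-tuning with in-batch negatives, as mentioned in
    #     the abstract.
    import torch
    import torch.nn.functional as F
    from transformers import AutoModel, AutoTokenizer

    doc_name = "bert-base-uncased"      # assumed (larger) document encoder
    qry_name = "prajjwal1/bert-tiny"    # assumed lightweight query encoder

    doc_tok = AutoTokenizer.from_pretrained(doc_name)
    qry_tok = AutoTokenizer.from_pretrained(qry_name)
    doc_enc = AutoModel.from_pretrained(doc_name)
    qry_enc = AutoModel.from_pretrained(qry_name)

    # Project the small query embeddings into the document embedding space.
    proj = torch.nn.Linear(qry_enc.config.hidden_size, doc_enc.config.hidden_size)

    def encode(model, tok, texts):
        """Mean-pool the last hidden states into one vector per input text."""
        batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
        hidden = model(**batch).last_hidden_state                 # (B, T, H)
        mask = batch["attention_mask"].unsqueeze(-1).float()
        return (hidden * mask).sum(dim=1) / mask.sum(dim=1)       # (B, H)

    def alignment_loss(queries):
        """Stage 1 (assumed form): align the query encoder with the frozen document encoder."""
        with torch.no_grad():
            target = encode(doc_enc, doc_tok, queries)
        pred = proj(encode(qry_enc, qry_tok, queries))
        return F.mse_loss(pred, target)

    def contrastive_loss(queries, positive_docs, temperature=0.05):
        """Stage 2: standard contrastive (InfoNCE) loss with in-batch negatives."""
        q = F.normalize(proj(encode(qry_enc, qry_tok, queries)), dim=-1)
        d = F.normalize(encode(doc_enc, doc_tok, positive_docs), dim=-1)
        scores = q @ d.T / temperature            # (B, B) similarity matrix
        labels = torch.arange(q.size(0))          # diagonal entries are the positives
        return F.cross_entropy(scores, labels)

In such a two-stage scheme, one would first minimize alignment_loss on a set of queries and only then switch to contrastive_loss on query-document pairs; the first stage is what would prevent the mismatched encoders from collapsing to constant representations during the second.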