Recently, pre-trained contextual models, such as BERT, have been shown to perform well on language-related tasks. We revisit the design decisions that govern the applicability of these models to the passage re-ranking task in open-domain question answering. We find that common approaches in the literature rely on fine-tuning a pre-trained BERT model and using a single, global representation of the input, discarding useful fine-grained relevance signals in token- or sentence-level representations. We argue that these discarded representations hold useful information that can be leveraged. In this paper, we explicitly model sentence-level representations using Dynamic Memory Networks (DMNs) and conduct an empirical evaluation on a diverse set of open-domain QA datasets, showing that memory-enhanced explicit sentence modelling improves passage re-ranking over fine-tuned vanilla BERT models. We further show that freezing the BERT model and training only the DMN layer still comes close to the original performance, while improving training efficiency drastically. This indicates that the usual fine-tuning step mostly helps to aggregate the inherent information into a single output token, as opposed to adapting the whole model to the new task, and only achieves rather small gains.
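To make the described setup concrete, the following is a minimal sketch (not the authors' code) of the idea: a frozen BERT encoder produces token representations, which are mean-pooled into sentence-level representations and aggregated by a small GRU-based, multi-hop memory layer in the spirit of a Dynamic Memory Network, whose final memory state yields the re-ranking score. All class, parameter, and variable names (MemoryReRanker, n_hops, sentence_mask, etc.) are illustrative assumptions, not identifiers from the paper.

```python
# Sketch only: frozen BERT + DMN-style sentence-level memory layer for passage re-ranking.
import torch
import torch.nn as nn
from transformers import BertModel

class MemoryReRanker(nn.Module):
    def __init__(self, n_hops: int = 3, hidden: int = 768):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        for p in self.bert.parameters():          # freeze BERT; only the memory layer is trained
            p.requires_grad = False
        self.attn = nn.Linear(hidden * 2, 1)      # attention over sentence representations
        self.update = nn.GRUCell(hidden, hidden)  # episodic memory update
        self.score = nn.Linear(hidden, 1)         # final relevance score
        self.n_hops = n_hops

    def forward(self, input_ids, attention_mask, sentence_mask):
        # sentence_mask: (batch, n_sent, seq_len) binary mask marking the tokens of each sentence
        tokens = self.bert(input_ids=input_ids,
                           attention_mask=attention_mask).last_hidden_state
        # mean-pool token vectors within each sentence -> (batch, n_sent, hidden)
        sents = torch.einsum("bst,bth->bsh", sentence_mask.float(), tokens)
        sents = sents / sentence_mask.sum(-1, keepdim=True).clamp(min=1).float()
        memory = tokens[:, 0]                     # initialise memory with the [CLS] vector
        for _ in range(self.n_hops):              # multi-hop attention over sentences
            weights = torch.softmax(
                self.attn(torch.cat(
                    [sents, memory.unsqueeze(1).expand_as(sents)], dim=-1)).squeeze(-1),
                dim=-1)
            episode = (weights.unsqueeze(-1) * sents).sum(1)
            memory = self.update(episode, memory)
        return self.score(memory).squeeze(-1)     # higher = more relevant passage
```

Because the BERT parameters are frozen, only the attention, GRU cell, and scoring weights receive gradients, which is what makes the reduced-training-cost variant described above cheap to train.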