Inference tasks such as answer sentence selection (AS2) and fact verification are typically solved by fine-tuning transformer-based models as individual sentence-pair classifiers. Recent studies show that these tasks benefit from modeling dependencies across multiple candidate sentences jointly. In this paper, we first show that popular pre-trained transformers perform poorly when fine-tuned on multi-candidate inference tasks. We then propose a new pre-training objective that models paragraph-level semantics across multiple input sentences. Our evaluation on three AS2 datasets and one fact verification dataset demonstrates the superiority of our pre-training technique over traditional ones, both when the transformers are used as joint models for multi-candidate inference tasks and when they are used as cross-encoders for sentence-pair formulations of these tasks. Our code and pre-trained models are released at https://github.com/amazon-research/wqa-multi-sentence-inference.
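To make the two formulations concrete, the sketch below shows how inputs could be built for (a) a standard sentence-pair cross-encoder and (b) a joint model that encodes a question together with all of its candidates in one sequence. This is a minimal illustration assuming a HuggingFace `roberta-base` tokenizer, not the released implementation; the example question and candidates are invented.

```python
from transformers import AutoTokenizer

# Illustrative sketch only: input construction for the two task formulations.
tokenizer = AutoTokenizer.from_pretrained("roberta-base")

question = "Who wrote Hamlet?"
candidates = [
    "Hamlet is a tragedy written by William Shakespeare.",
    "The play is set in Denmark.",
    "Shakespeare was born in Stratford-upon-Avon.",
]

# (a) Sentence-pair formulation: each (question, candidate) pair is
# classified independently by a cross-encoder.
pair_batch = tokenizer(
    [question] * len(candidates), candidates,
    padding=True, truncation=True, return_tensors="pt",
)

# (b) Joint multi-candidate formulation: the question and all candidates are
# packed into a single input separated by the SEP token, so self-attention
# can model dependencies across candidates at the paragraph level.
sep = tokenizer.sep_token
joint_text = f" {sep} ".join([question] + candidates)
joint_batch = tokenizer(joint_text, truncation=True, return_tensors="pt")

print(pair_batch["input_ids"].shape)   # (k, seq_len): k independent pairs
print(joint_batch["input_ids"].shape)  # (1, seq_len): one joint sequence
```

The joint encoding is what motivates the paragraph-level pre-training objective: an off-the-shelf model pre-trained on single sentences or sentence pairs sees such multi-candidate inputs only at fine-tuning time, which the paper identifies as a source of poor performance.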