While recent research on natural language inference has considerably benefited from large annotated datasets, the amount of inference-related knowledge (including commonsense) provided in the annotated data is still rather limited. Two lines of approaches can be used to further address this limitation: (1) unsupervised pretraining can leverage knowledge in much larger unstructured text data; (2) structured (often human-curated) knowledge has started to be considered in neural-network-based models for NLI. An immediate question is whether these two approaches complement each other, and, if so, how to develop models that bring together their advantages. In this paper, we propose models that leverage structured knowledge in different components of pre-trained models. Our results show that the proposed models perform better than previous BERT-based state-of-the-art models. Although our models are proposed for NLI, they can be easily extended to other sentence or sentence-pair classification problems.
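To make the general idea concrete, the following is a minimal sketch (not the authors' actual architecture, whose details are given in the paper body) of one way structured knowledge can be injected into a pre-trained sentence-pair classifier: an external knowledge feature vector for the premise-hypothesis pair (e.g., relation indicators derived from a lexical resource such as WordNet) is assumed to be precomputed and is concatenated with BERT's pooled representation before classification. The class name, feature dimension, and fusion strategy here are illustrative assumptions.

```python
# Minimal sketch: fusing precomputed external knowledge features with a
# pre-trained BERT encoder for sentence-pair classification (e.g., NLI).
# The construction of the knowledge features themselves is not shown.
import torch
import torch.nn as nn
from transformers import BertModel


class KnowledgeEnrichedClassifier(nn.Module):
    def __init__(self, num_labels=3, knowledge_dim=5, bert_name="bert-base-uncased"):
        super().__init__()
        self.bert = BertModel.from_pretrained(bert_name)
        hidden = self.bert.config.hidden_size
        # The classifier sees the pooled [CLS] representation concatenated
        # with the external knowledge feature vector for the sentence pair.
        self.classifier = nn.Sequential(
            nn.Linear(hidden + knowledge_dim, hidden),
            nn.Tanh(),
            nn.Dropout(0.1),
            nn.Linear(hidden, num_labels),
        )

    def forward(self, input_ids, attention_mask, token_type_ids, knowledge_feats):
        outputs = self.bert(
            input_ids=input_ids,
            attention_mask=attention_mask,
            token_type_ids=token_type_ids,
        )
        pooled = outputs.pooler_output                      # (batch, hidden)
        fused = torch.cat([pooled, knowledge_feats], dim=-1)
        return self.classifier(fused)                       # (batch, num_labels) logits
```

This concatenation-at-the-classifier variant is only one of the components where knowledge could be introduced; knowledge could also, for instance, inform attention or word-level representations inside the encoder.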