While pre-trained language models (PLMs) are the go-to solution for many natural language processing problems, they are still very limited in their ability to capture and use common-sense knowledge. In fact, even when information is available in the form of approximate (soft) logical rules, it is not clear how to transfer it to a PLM in order to improve its performance on deductive reasoning tasks. Here, we aim to bridge this gap by teaching PLMs how to reason with soft Horn rules. We introduce a classification task where, given facts and soft rules, the PLM should predict the probability of a given hypothesis. We release the first dataset for this task, and we propose a revised loss function that enables the PLM to learn to predict precise probabilities. Our evaluation results show that the resulting fine-tuned models achieve very high performance, even on logical rules that were unseen during training. Moreover, we demonstrate that logical notions expressed by the rules are transferred to the fine-tuned model, yielding state-of-the-art results on external datasets.
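To make the setup concrete, below is a minimal sketch of what probability-targeted fine-tuning for this task could look like. Everything in it is an illustrative assumption rather than the paper's actual method: the roberta-base backbone, the soft_label_bce helper, the example facts, rule, and gold probability of 0.7 are all hypothetical, and a soft-label binary cross-entropy is only one plausible form of a "revised loss" that pushes the model toward an exact probability instead of a hard 0/1 decision.

```python
# Hypothetical sketch (not the paper's code): fine-tune a PLM to output a
# probability for a hypothesis given facts and a soft rule, using a
# soft-label cross-entropy loss whose target is the rule-derived probability.
import torch
import torch.nn.functional as F
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "roberta-base"  # assumed backbone; the paper's choice may differ
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=1)

def soft_label_bce(logits: torch.Tensor, target_prob: torch.Tensor) -> torch.Tensor:
    """Binary cross-entropy against a probabilistic target.

    Unlike standard BCE with 0/1 labels, the target here is the
    rule-derived probability of the hypothesis, so the loss is minimized
    when the model outputs that exact probability, not a hard decision.
    """
    return F.binary_cross_entropy_with_logits(logits.squeeze(-1), target_prob)

# Toy example: facts and a soft rule encoded as the first segment,
# the hypothesis as the second; 0.7 is an assumed rule-derived probability.
context = ("Alice is the parent of Bob. "
           "If A is the parent of B, then A is older than B (p=0.7).")
hypothesis = "Alice is older than Bob."
batch = tokenizer(context, hypothesis, return_tensors="pt")
logits = model(**batch).logits            # shape: (1, 1)
loss = soft_label_bce(logits, torch.tensor([0.7]))
loss.backward()                           # standard fine-tuning step follows
```

At inference time, applying a sigmoid to the single logit would yield the model's predicted probability for the hypothesis, matching the probabilistic output the task requires.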