Recent work has shown success in incorporating pre-trained models such as BERT to improve NLP systems. However, existing pre-trained models lack causal knowledge, which prevents today's NLP systems from reasoning the way humans do. In this paper, we investigate the problem of injecting causal knowledge into pre-trained models. There are two fundamental challenges: 1) how to collect causal pairs of various granularities from unstructured text; 2) how to effectively inject causal knowledge into pre-trained models. To address these challenges, we extend the idea of CausalBERT from previous studies and conduct experiments on a range of datasets to evaluate its effectiveness. In addition, we adopt a regularization-based method that preserves the already learned knowledge through an extra regularization term while injecting causal knowledge. Extensive experiments on 7 datasets, including four causal pair classification tasks, two causal QA tasks, and a causal inference task, demonstrate that CausalBERT captures rich causal knowledge and outperforms all state-of-the-art methods based on pre-trained models, establishing a new state of the art on causal inference.
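To make the regularization-based injection concrete, the sketch below shows one plausible instantiation: fine-tuning BERT on causal pair classification while adding an L2 penalty on the distance between the current parameters and the original pre-trained weights, so that previously learned knowledge is preserved. This is a minimal illustration, not the authors' released code; the checkpoint name, the binary-label setup, the hypothetical `training_step` helper, and the value of `reg_lambda` are assumptions.

```python
# Minimal sketch (assumed, not the paper's implementation): inject causal
# knowledge by fine-tuning on (cause, effect) pairs while an extra
# regularization term keeps the weights close to the pre-trained anchor.
import torch
from transformers import BertForSequenceClassification, BertTokenizer

model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

# Snapshot of the pre-trained parameters, used as the regularization anchor.
anchor = {n: p.detach().clone() for n, p in model.named_parameters()}
reg_lambda = 0.01  # assumed hyper-parameter; the paper's value may differ

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

def training_step(cause_effect_texts, labels):
    """One update on a batch of (cause, effect) text pairs with 0/1 labels."""
    batch = tokenizer(
        [c for c, _ in cause_effect_texts],
        [e for _, e in cause_effect_texts],
        padding=True, truncation=True, return_tensors="pt",
    )
    out = model(**batch, labels=torch.tensor(labels))
    # Extra regularization term: squared distance to the pre-trained weights,
    # discouraging catastrophic forgetting of the original knowledge.
    reg = sum(((p - anchor[n]) ** 2).sum() for n, p in model.named_parameters())
    loss = out.loss + reg_lambda * reg
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```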