Mining causality from text is a complex yet crucial natural language understanding task that corresponds to human cognition. Existing approaches to this problem fall into two primary categories: feature-engineering-based and neural-model-based methods. In this paper, we observe that the former suffers from incomplete coverage and inherent errors but provides prior knowledge, while the latter leverages contextual information but performs insufficient causal inference. To address these limitations, we propose a novel causality detection model named MCDN that explicitly models the causal reasoning process and, moreover, exploits the advantages of both methods. Specifically, we adopt multi-head self-attention to acquire semantic features at the word level and develop the SCRN to infer causality at the segment level. To the best of our knowledge, this is the first time the Relation Network has been applied to causality tasks. The experimental results show that: 1) the proposed approach achieves prominent performance on causality detection; 2) further analysis manifests the effectiveness and robustness of MCDN.
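To illustrate the word-level component mentioned above, the sketch below implements plain multi-head self-attention over a sequence of word vectors in NumPy. It is a minimal, generic sketch of the mechanism, not the MCDN implementation: the random projection matrices stand in for learned parameters, and all dimensions are assumed for illustration.

```python
import numpy as np

def multi_head_self_attention(x, num_heads, rng):
    """Minimal multi-head self-attention over word vectors.

    x: (seq_len, d_model) array of word embeddings.
    Projection weights are drawn randomly here purely for
    illustration; a trained model would learn them.
    """
    seq_len, d_model = x.shape
    assert d_model % num_heads == 0
    d_k = d_model // num_heads
    # Random Q/K/V projections stand in for learned parameters.
    w_q = rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
    w_k = rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
    w_v = rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
    # Project, then split the model dimension into heads: (heads, seq, d_k).
    q = (x @ w_q).reshape(seq_len, num_heads, d_k).transpose(1, 0, 2)
    k = (x @ w_k).reshape(seq_len, num_heads, d_k).transpose(1, 0, 2)
    v = (x @ w_v).reshape(seq_len, num_heads, d_k).transpose(1, 0, 2)
    # Scaled dot-product attention per head: (heads, seq, seq).
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d_k)
    weights = np.exp(scores - scores.max(-1, keepdims=True))
    weights /= weights.sum(-1, keepdims=True)  # row-wise softmax
    out = weights @ v                          # (heads, seq, d_k)
    # Concatenate heads back into (seq_len, d_model).
    return out.transpose(1, 0, 2).reshape(seq_len, d_model)

rng = np.random.default_rng(0)
x = rng.standard_normal((6, 16))  # 6 words, 16-dim embeddings
y = multi_head_self_attention(x, num_heads=4, rng=rng)
print(y.shape)  # (6, 16)
```

Each head attends over the full word sequence independently, so different heads can capture different semantic relations between words; the concatenated output serves as the word-level semantic feature.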