Extracting temporal relations (e.g., before, after, and simultaneous) among events is crucial to natural language understanding. A key challenge of this problem is that when the events of interest are far apart in text, the intervening context often becomes complicated, making it difficult to resolve the temporal relationship between them. This paper thus proposes a new Syntax-guided Graph Transformer network (SGT) to mitigate this issue, by (1) explicitly exploiting the connection between two events based on their dependency parse trees, and (2) automatically locating temporal cues between two events via a novel syntax-guided attention mechanism. Experiments on two benchmark datasets, MATRES and TB-Dense, show that our approach significantly outperforms previous state-of-the-art methods on both end-to-end temporal relation extraction and temporal relation classification; this improvement also proves robust on the contrast set of MATRES. The code is publicly available at https://github.com/VT-NLP/Syntax-Guided-Graph-Transformer.
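The core idea of syntax-guided attention can be illustrated with a minimal sketch: standard attention whose scores are masked so that each token attends only to its neighbors in the dependency parse tree. This is a hypothetical simplification, not the paper's exact formulation; the function name, the boolean-adjacency representation, and the toy sentence below are all illustrative assumptions.

```python
import numpy as np

def syntax_guided_attention(scores, dep_adjacency, neg_inf=-1e9):
    """Illustrative sketch: mask raw attention scores so each token
    attends only to its dependency-tree neighbors (plus itself),
    then apply a row-wise softmax. Not the paper's exact mechanism."""
    # Allow self-attention in addition to dependency edges.
    mask = dep_adjacency | np.eye(len(dep_adjacency), dtype=bool)
    # Positions outside the dependency neighborhood get ~zero weight.
    masked = np.where(mask, scores, neg_inf)
    e = np.exp(masked - masked.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Toy 3-token "sentence" with dependency edges 0<->1 and 1<->2.
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=bool)
scores = np.ones((3, 3))          # uniform raw scores for clarity
weights = syntax_guided_attention(scores, adj)
```

With uniform raw scores, token 0 splits its attention evenly between itself and its only dependency neighbor (token 1), while the syntactically unconnected token 2 receives effectively zero weight.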