We present a sequential model for temporal relation classification between intra-sentence events. The key observation is that the overall syntactic structure and compositional meaning of the multi-word context between events are important for distinguishing among fine-grained temporal relations. Specifically, our approach first extracts a sequence of context words that indicates the temporal relation between two events and aligns well with the dependency path between the two event mentions. The context word sequence, together with a part-of-speech tag sequence and a dependency relation sequence generated in correspondence with the word sequence, is then provided as input to bidirectional long short-term memory (LSTM) models. The neural networks learn compositional syntactic and semantic representations of the contexts surrounding the two events and predict the temporal relation between them. Evaluation of the proposed approach on the TimeBank corpus shows that sequential modeling can accurately recognize temporal relations between events, outperforming a neural network model that takes various discrete features as input, imitating previous feature-based models.
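The abstract does not include an implementation; as a minimal sketch of the described pipeline, the PyTorch model below embeds the three aligned sequences (words, POS tags, dependency relations along the path between the two event mentions), concatenates the embeddings, and classifies the final bidirectional LSTM states into a temporal relation. The class name, vocabulary sizes, and all dimensions are illustrative placeholders, not values from the paper.

```python
import torch
import torch.nn as nn

class BiLSTMTemporalClassifier(nn.Module):
    """Hypothetical sketch: BiLSTM over the context sequence between two
    event mentions. All hyperparameters are assumed, not from the paper."""

    def __init__(self, n_words=10000, n_pos=50, n_dep=50,
                 word_dim=100, tag_dim=25, hidden=128, n_relations=6):
        super().__init__()
        # One embedding table per input sequence type
        self.word_emb = nn.Embedding(n_words, word_dim)
        self.pos_emb = nn.Embedding(n_pos, tag_dim)
        self.dep_emb = nn.Embedding(n_dep, tag_dim)
        self.lstm = nn.LSTM(word_dim + 2 * tag_dim, hidden,
                            batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_relations)

    def forward(self, words, pos_tags, dep_rels):
        # words, pos_tags, dep_rels: (batch, seq_len) index tensors
        # for the three sequences aligned token-by-token
        x = torch.cat([self.word_emb(words),
                       self.pos_emb(pos_tags),
                       self.dep_emb(dep_rels)], dim=-1)
        _, (h, _) = self.lstm(x)
        # Concatenate the final forward and backward hidden states
        h = torch.cat([h[0], h[1]], dim=-1)
        return self.out(h)  # (batch, n_relations) relation logits

# Usage with random indices, batch of 2 paths of length 7:
model = BiLSTMTemporalClassifier()
words = torch.randint(0, 10000, (2, 7))
pos = torch.randint(0, 50, (2, 7))
dep = torch.randint(0, 50, (2, 7))
logits = model(words, pos, dep)  # shape (2, 6)
```

Concatenating the three embedding streams before the recurrent layer is one plausible way to combine the lexical, POS, and dependency signals the abstract mentions; the paper itself may fuse them differently.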