Natural Language Processing tasks such as resolving the coreference of events require understanding the relation between two text snippets. These tasks are typically formulated as (binary) classification problems over independently induced representations of the text snippets. In this work, we develop a Pairwise Representation Learning (PairwiseRL) scheme for event mention pairs, in which we jointly encode a pair of text snippets so that the representation of each mention in the pair is induced in the context of the other. Furthermore, our scheme supports a finer, structured representation of the text snippets to facilitate encoding events and their arguments. We show that PairwiseRL, despite its simplicity, outperforms prior state-of-the-art event coreference systems on both cross-document and within-document event coreference benchmarks. We also conduct an in-depth analysis of the improvements and limitations of pairwise representations to provide insights for future work.
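To make the idea of joint pairwise encoding concrete, the following is a minimal sketch (not the authors' implementation): the two snippets are fed to a pretrained transformer as a single input so that self-attention contextualizes each mention with respect to the other, and the two mention-span representations are then concatenated for a binary coreference classifier. The model name ("roberta-base"), the mean-pooling of spans, and the `PairScorer` head are illustrative assumptions.

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Assumed backbone; the paper's exact encoder and pooling may differ.
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
encoder = AutoModel.from_pretrained("roberta-base")

def encode_pair(snippet_a: str, snippet_b: str) -> torch.Tensor:
    # Encode both snippets as one sequence so tokens of each snippet
    # can attend to the other (pairwise, not independent, encoding).
    inputs = tokenizer(snippet_a, snippet_b, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = encoder(**inputs).last_hidden_state  # (1, seq_len, hidden_size)
    return hidden

class PairScorer(torch.nn.Module):
    """Hypothetical pair scorer: pool each mention span from the joint
    encoding, concatenate, and classify coreferent vs. not coreferent."""
    def __init__(self, hidden_size: int = 768):
        super().__init__()
        self.classifier = torch.nn.Linear(2 * hidden_size, 2)

    def forward(self, hidden, span_a, span_b):
        # span_a / span_b are (start, end) token indices of the two event triggers
        # in the jointly encoded sequence.
        rep_a = hidden[0, span_a[0]:span_a[1]].mean(dim=0)
        rep_b = hidden[0, span_b[0]:span_b[1]].mean(dim=0)
        return self.classifier(torch.cat([rep_a, rep_b], dim=-1))
```

The structured variant described in the abstract would additionally pool argument spans (e.g., participants, time, location) and concatenate them with the trigger representations before scoring.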