Recent work on entity coreference resolution (CR) follows current deep-learning trends, relying on embeddings and relatively simple task-related features. State-of-the-art (SOTA) models do not make use of hierarchical representations of discourse structure. In this work, we leverage automatically constructed discourse parse trees within a neural approach and demonstrate a significant improvement on two benchmark entity coreference-resolution datasets. We further explore how the impact varies depending on the type of mention.