Document-level Relation Extraction (RE) requires extracting relations expressed both within and across sentences. Recent works show that graph-based methods, which usually construct a document-level graph that captures document-aware interactions, can obtain useful entity representations and thus help tackle document-level RE. These methods either focus more on the entire graph or pay more attention to a part of it, e.g., paths between the target entity pair. However, we find that document-level RE may benefit from focusing on both simultaneously. Therefore, to obtain more comprehensive entity representations, we propose the \textbf{C}oarse-to-\textbf{F}ine \textbf{E}ntity \textbf{R}epresentation model (\textbf{CFER}), which adopts a coarse-to-fine strategy involving two phases. First, CFER uses graph neural networks to integrate global information over the entire graph at a coarse level. Next, CFER utilizes the global information as guidance to selectively aggregate path information between the target entity pair at a fine level. For classification, we combine the entity representations from both levels into more comprehensive representations for relation extraction. Experimental results on a large-scale document-level RE dataset show that CFER achieves better performance than previous baseline models. Furthermore, we verify the effectiveness of our coarse-to-fine strategy through elaborate model analysis.
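To make the two-phase strategy concrete, below is a minimal, self-contained sketch in plain PyTorch. It is not the authors' CFER implementation; the module names (\texttt{CoarseGNN}, \texttt{FinePathAggregator}, \texttt{CFERSketch}), the single mean-aggregation GNN layer, and the dot-product attention used to let the coarse-level global states guide path aggregation are all illustrative assumptions made only to show how a coarse level, a fine level, and a combined classifier could fit together.

\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F


class CoarseGNN(nn.Module):
    """Coarse level: propagate information over the whole document graph."""

    def __init__(self, dim: int):
        super().__init__()
        self.linear = nn.Linear(dim, dim)

    def forward(self, node_feats: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # node_feats: (num_nodes, dim); adj: (num_nodes, num_nodes) with self-loops.
        deg = adj.sum(dim=-1, keepdim=True).clamp(min=1.0)
        agg = adj @ node_feats / deg            # mean aggregation over neighbours
        return F.relu(self.linear(agg))         # coarse-level (global) node states


class FinePathAggregator(nn.Module):
    """Fine level: use global states of the entity pair to attend over paths."""

    def __init__(self, dim: int):
        super().__init__()
        self.query = nn.Linear(2 * dim, dim)

    def forward(self, global_pair: torch.Tensor, path_reprs: torch.Tensor) -> torch.Tensor:
        # global_pair: (2*dim,) concatenated coarse states of head and tail entity.
        # path_reprs:  (num_paths, dim), one vector per path between the pair.
        q = self.query(global_pair)             # (dim,) query guided by global info
        scores = path_reprs @ q                 # (num_paths,)
        weights = torch.softmax(scores, dim=0)  # attention over candidate paths
        return weights @ path_reprs             # (dim,) selectively aggregated paths


class CFERSketch(nn.Module):
    """Combine coarse- and fine-level representations for relation classification."""

    def __init__(self, dim: int, num_relations: int):
        super().__init__()
        self.coarse = CoarseGNN(dim)
        self.fine = FinePathAggregator(dim)
        self.classifier = nn.Linear(3 * dim, num_relations)

    def forward(self, node_feats, adj, head_idx, tail_idx, path_reprs):
        g = self.coarse(node_feats, adj)                  # coarse phase
        global_pair = torch.cat([g[head_idx], g[tail_idx]])
        fine_repr = self.fine(global_pair, path_reprs)    # fine phase, guided by global info
        combined = torch.cat([global_pair, fine_repr])    # comprehensive representation
        return self.classifier(combined)                  # relation logits
\end{verbatim}

In this sketch, the coarse-level states of the target entity pair both serve as the attention query at the fine level and are concatenated with the aggregated path representation before classification, mirroring the idea of using representations from both levels rather than either one alone.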