Document-level Relation Extraction (RE) requires extracting relations expressed both within and across sentences. Recent work shows that graph-based methods, which usually construct a document-level graph capturing document-aware interactions, can obtain useful entity representations and thus help tackle document-level RE. These methods either focus more on the entire graph or pay more attention to a part of the graph, e.g., paths between the target entity pair. However, we find that document-level RE may benefit from focusing on both simultaneously. Therefore, to obtain more comprehensive entity representations, we propose the Coarse-to-Fine Entity Representation model (CFER), which adopts a coarse-to-fine strategy involving two phases. First, CFER uses graph neural networks to integrate global information over the entire graph at a coarse level. Next, CFER utilizes this global information as guidance to selectively aggregate path information between the target entity pair at a fine level. For classification, we combine the entity representations from both levels into more comprehensive representations for relation extraction. Experimental results on two document-level RE datasets, DocRED and CDR, show that CFER outperforms existing models and is robust to uneven label distributions.
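As a rough illustration of the two-phase strategy, the following is a minimal NumPy sketch, not the authors' implementation: the graph construction, GNN variant (mean-aggregation message passing), and the attention form used for path aggregation are all simplifying assumptions made here for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

def gnn_layer(h, adj, w):
    """Coarse level: one round of mean-aggregation message passing
    over the document-level graph (a generic GNN stand-in)."""
    deg = adj.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0  # avoid division by zero for isolated nodes
    return np.tanh((adj @ h) / deg @ w)

def path_attention(h, path_nodes, guide):
    """Fine level: softmax attention over nodes on a path between the
    target entity pair, scored against the coarse 'guide' vector so
    that global information steers the aggregation."""
    scores = h[path_nodes] @ guide
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ h[path_nodes]

# Toy document graph: 5 mention/entity nodes, feature dimension 8.
n, d = 5, 8
h0 = rng.standard_normal((n, d))
adj = np.zeros((n, n))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4), (0, 2)]:
    adj[i, j] = adj[j, i] = 1.0  # undirected edges

# Coarse phase: global entity representations from the whole graph.
w = rng.standard_normal((d, d)) / np.sqrt(d)
h_coarse = gnn_layer(h0, adj, w)

# Fine phase: aggregate a path between entities 0 and 4 (via 2 and 3),
# guided by the coarse representation of the head entity.
path = np.array([0, 2, 3, 4])
h_fine = path_attention(h_coarse, path, h_coarse[0])

# Classification would consume both levels combined, e.g. concatenated.
entity_repr = np.concatenate([h_coarse[0], h_fine])
print(entity_repr.shape)  # (16,)
```

The key design point the sketch mirrors is that the fine-level aggregation is conditioned on the coarse-level output, rather than the two levels being computed independently and merged only at the end.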