Document-level relation extraction aims to extract relations among multiple entity pairs from a document. Previously proposed graph-based or transformer-based models utilize entities independently, ignoring the global information among relational triples. This paper approaches the problem by predicting an entity-level relation matrix to capture both local and global information, analogous to the semantic segmentation task in computer vision. Herein, we propose a Document U-shaped Network for document-level relation extraction. Specifically, we leverage an encoder module to capture the contextual information of entities and a U-shaped segmentation module over the image-style feature map to capture global interdependencies among triples. Experimental results show that our approach achieves state-of-the-art performance on three benchmark datasets: DocRED, CDR, and GDA.
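To make the architectural idea concrete, the following is a minimal, self-contained sketch (not the authors' implementation): entity embeddings from an encoder are combined pairwise into an N x N "image-style" feature map, a small U-Net-style encoder-decoder refines that map to capture global interdependencies, and each cell is classified into a relation type. All class names, dimensions, and the relation count are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class UShapedSegmenter(nn.Module):
    """Tiny U-Net-style encoder-decoder over the entity-pair feature map."""

    def __init__(self, channels: int):
        super().__init__()
        self.down1 = nn.Conv2d(channels, channels * 2, kernel_size=3, padding=1)
        self.down2 = nn.Conv2d(channels * 2, channels * 4, kernel_size=3, padding=1)
        self.up1 = nn.Conv2d(channels * 6, channels * 2, kernel_size=3, padding=1)
        self.up2 = nn.Conv2d(channels * 3, channels, kernel_size=3, padding=1)

    def forward(self, x):                                 # x: (B, C, N, N)
        d1 = F.relu(self.down1(x))                        # (B, 2C, N, N)
        p1 = F.max_pool2d(d1, 2)                          # (B, 2C, N/2, N/2)
        d2 = F.relu(self.down2(p1))                       # (B, 4C, N/2, N/2)
        u1 = F.interpolate(d2, size=d1.shape[-2:], mode="nearest")
        u1 = F.relu(self.up1(torch.cat([u1, d1], dim=1)))  # skip connection
        u2 = F.relu(self.up2(torch.cat([u1, x], dim=1)))   # skip connection
        return u2                                          # (B, C, N, N)


class DocRelationNet(nn.Module):
    """Pairwise entity feature map + U-shaped segmentation head (illustrative)."""

    def __init__(self, entity_dim: int = 128, map_channels: int = 64,
                 num_relations: int = 97):                # relation count is a placeholder
        super().__init__()
        self.pair_proj = nn.Linear(2 * entity_dim, map_channels)
        self.segmenter = UShapedSegmenter(map_channels)
        self.classifier = nn.Linear(map_channels, num_relations)

    def forward(self, entity_embs):                        # entity_embs: (B, N, entity_dim)
        b, n, d = entity_embs.shape
        head = entity_embs.unsqueeze(2).expand(b, n, n, d)  # head entity per matrix cell
        tail = entity_embs.unsqueeze(1).expand(b, n, n, d)  # tail entity per matrix cell
        pair = self.pair_proj(torch.cat([head, tail], dim=-1))   # (B, N, N, C)
        feat = self.segmenter(pair.permute(0, 3, 1, 2))           # (B, C, N, N)
        logits = self.classifier(feat.permute(0, 2, 3, 1))        # (B, N, N, R)
        return logits


if __name__ == "__main__":
    model = DocRelationNet()
    fake_entities = torch.randn(2, 8, 128)   # 2 documents, 8 entities each (stand-in for an encoder's output)
    print(model(fake_entities).shape)        # torch.Size([2, 8, 8, 97])
```

In this sketch the document encoder is abstracted away as pre-computed entity embeddings; in practice those would come from a contextual encoder over the document, and each cell of the resulting N x N logits corresponds to one head-tail entity pair.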