Extracting relations across large text spans has been relatively underexplored in NLP, but it is particularly important for high-value domains such as biomedicine, where obtaining high recall of the latest findings is crucial for practical applications. Compared to conventional information extraction confined to short text spans, document-level relation extraction faces additional challenges in both inference and learning. Given longer text spans, state-of-the-art neural architectures are less effective and task-specific self-supervision such as distant supervision becomes very noisy. In this paper, we propose decomposing document-level relation extraction into relation detection and argument resolution, taking inspiration from Davidsonian semantics. This enables us to incorporate explicit discourse modeling and leverage modular self-supervision for each sub-problem, which is less noise-prone and can be further refined end-to-end via variational EM. We conduct a thorough evaluation in biomedical machine reading for precision oncology, where cross-paragraph relation mentions are prevalent. Our method outperforms prior state of the art, such as multi-scale learning and graph neural networks, by over 20 absolute F1 points. The gain is particularly pronounced among the most challenging relation instances whose arguments never co-occur in a paragraph.
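To make the proposed decomposition concrete, here is a minimal, hypothetical sketch of the two-stage pipeline: a relation-detection step that spots a relation mention near a trigger word, followed by an argument-resolution step that links each argument slot to an entity mention elsewhere in the document. The trigger lexicon, function names, and proximity heuristic are all illustrative inventions, not the paper's actual models.

```python
# Toy trigger lexicon: trigger word -> relation type (invented for illustration).
TRIGGERS = {"sensitize": "sensitivity"}

def detect_relation(sentences):
    """Stage 1: return (sentence_index, relation_type) for the first trigger found."""
    for i, sent in enumerate(sentences):
        for word, rel in TRIGGERS.items():
            if word in sent:
                return i, rel
    return None

def resolve_argument(sentences, anchor_idx, candidates):
    """Stage 2: pick the candidate entity mentioned closest to the anchor sentence.

    A real system would use a learned coreference/resolution model; nearest
    mention stands in for that here.
    """
    best, best_dist = None, float("inf")
    for cand in candidates:
        for i, sent in enumerate(sentences):
            if cand in sent:
                dist = abs(i - anchor_idx)
                if dist < best_dist:
                    best, best_dist = cand, dist
    return best

# A tiny document where the relation's arguments never co-occur in one sentence.
doc = [
    "EGFR L858R is a common mutation in lung cancer.",
    "Patients were treated with gefitinib.",
    "The mutation was shown to sensitize tumors to the drug.",
]
anchor, rel = detect_relation(doc)
drug = resolve_argument(doc, anchor, ["gefitinib", "aspirin"])
variant = resolve_argument(doc, anchor, ["EGFR L858R"])
print(rel, drug, variant)  # -> sensitivity gefitinib EGFR L858R
```

The point of the decomposition is visible even in this toy: the detection step only needs local evidence around the trigger, while resolution handles the long-range linking, so each sub-problem can be supervised (or self-supervised) separately before joint refinement.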