Open Information Extraction (OpenIE) facilitates the open-domain discovery of textual facts. However, prevailing solutions evaluate OpenIE models on in-domain test sets held out from the training corpus, which violates the task's founding principle of domain independence. In this paper, we propose to advance OpenIE towards a more realistic scenario: generalizing to unseen target domains whose data distributions differ from the source training domains, which we term Generalized OpenIE. To this end, we first introduce GLOBE, a large-scale human-annotated multi-domain OpenIE benchmark, to examine the robustness of recent OpenIE models to domain shift; the observed relative performance degradation of up to 70% highlights the difficulty of Generalized OpenIE. We then propose DragonIE, which explores a minimalist graph expression of textual facts, the directed acyclic graph, to improve OpenIE generalization. Extensive experiments demonstrate that DragonIE outperforms previous methods in both in-domain and out-of-domain settings by up to 6.0 points of absolute F1 score, but there is still ample room for improvement.