We cast a suite of information extraction tasks into a text-to-triple translation framework. Instead of solving each task by relying on task-specific datasets and models, we formalize the task as a translation between task-specific input text and output triples. Taking the task-specific input, we enable task-agnostic translation by leveraging the latent knowledge that a pre-trained language model has about the task. We further demonstrate that a simple pre-training task of predicting which relational information corresponds to which input text is an effective way to produce task-specific outputs. This enables the zero-shot transfer of our framework to downstream tasks. We study the zero-shot performance of this framework on open information extraction (OIE2016, NYT, WEB, PENN), relation classification (FewRel and TACRED), and factual probing (Google-RE and T-REx). The model transfers non-trivially to most tasks and is often competitive with fully supervised methods without any task-specific training. For instance, we significantly outperform the F1 score of a supervised open information extraction system without using its training set.
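To make the shared input/output interface concrete, the following is a minimal illustrative sketch of the text-to-triple format across the three task families named above. The sentences, entity pair, and (subject, relation, object) triples are hand-written hypothetical examples, not outputs of the framework.

```python
# Illustrative sketch only: the input/output format implied by the text-to-triple
# framing. Each task supplies task-specific input text and receives a list of
# (subject, relation, object) triples. All examples below are hypothetical.

examples = {
    "open information extraction": {
        "input": "Born in Hawaii, Obama served as the 44th President of the United States.",
        "triples": [
            ("Obama", "born in", "Hawaii"),
            ("Obama", "served as", "the 44th President of the United States"),
        ],
    },
    "relation classification": {
        # The input marks an entity pair; the output triple's relation is the class label.
        "input": "Obama was born in Hawaii. (head: Obama, tail: Hawaii)",
        "triples": [("Obama", "place_of_birth", "Hawaii")],
    },
    "factual probing": {
        # Cloze-style query; the output triple fills in the missing object.
        "input": "Obama was born in ___.",
        "triples": [("Obama", "born in", "Hawaii")],
    },
}

for task, example in examples.items():
    print(f"{task}: {example['input']} -> {example['triples']}")
```

Framing all three tasks around the same triple structure is what lets a single translation model serve them without task-specific training.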