Relational structure extraction covers a wide range of tasks and plays an important role in natural language processing. Recently, many approaches tend to design sophisticated graphical models to capture the complex relations between objects described in a sentence. In this work, we demonstrate that simple tagging models can surprisingly achieve competitive performance with a small trick -- priming. Tagging models with priming append information about the operated objects to the input sequence of a pretrained language model. By exploiting the contextualized nature of the pretrained language model, priming helps the contextualized representation of the sentence better embed information about the operated objects, making it more suitable for relational structure extraction. We conduct extensive experiments on three different tasks spanning ten datasets in five languages, and show that our model is general and effective despite its simplicity. We further carry out a comprehensive analysis to understand our model and propose an efficient approximation of our method that achieves nearly the same performance with faster inference.
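To make the priming idea concrete, below is a minimal sketch of how an object mention can be appended to the encoder input as a second segment, so that self-attention lets every sentence token condition on the operated object. The model choice, function name, and example are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of priming: append the operated object's mention to the
# input of a pretrained encoder ([CLS] sentence [SEP] object [SEP]).
# Assumes HuggingFace transformers and PyTorch; the model name and the
# helper `encode_with_priming` are hypothetical choices for illustration.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
encoder = AutoModel.from_pretrained("bert-base-multilingual-cased")

def encode_with_priming(sentence: str, primed_object: str) -> torch.Tensor:
    """Encode `sentence` primed with `primed_object`.

    The object mention rides along as a second segment, so the
    contextualized representation of each sentence token can embed
    information about the operated object.
    """
    inputs = tokenizer(sentence, primed_object, return_tensors="pt")
    with torch.no_grad():
        outputs = encoder(**inputs)
    # Token-level representations; a tagging head (e.g., a linear layer
    # predicting BIO labels) would be applied on top of these.
    return outputs.last_hidden_state

# Example: prime the encoder with the entity "Paris" before tagging the
# relation arguments associated with it in the sentence.
hidden = encode_with_priming("Paris is the capital of France.", "Paris")
print(hidden.shape)  # (1, seq_len, hidden_size)
```

Under this reading, tagging one sentence with respect to k candidate objects requires k primed encoder passes, which is consistent with the abstract's mention of an efficient approximation traded against inference speed.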