Pre-trained language models (PLMs) have contributed significantly to relation extraction (RE) by demonstrating remarkable few-shot learning abilities. However, prompt-tuning methods for relation extraction may still fail to generalize to rare or hard patterns. Note that the conventional parametric learning paradigm can be viewed as memorization: the training data is the book, and inference is a closed-book test. Long-tailed or hard patterns can hardly be memorized in parameters given only a few instances. To this end, we regard RE as an open-book examination and propose a new semiparametric paradigm of retrieval-enhanced prompt tuning for relation extraction. We construct an open-book datastore for retrieval, storing prompt-based instance representations and their corresponding relation labels as memorized key-value pairs. During inference, the model infers relations by linearly interpolating the base output of the PLM with the non-parametric nearest-neighbor distribution over the datastore. In this way, our model not only infers relations through the knowledge stored in its weights during training but also assists decision making by consulting and querying examples in the open-book datastore. Extensive experiments on benchmark datasets show that our method achieves state-of-the-art results in both standard supervised and few-shot settings. Code is available at https://github.com/zjunlp/PromptKG/tree/main/research/RetrievalRE.
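To make the interpolation step concrete, below is a minimal NumPy sketch of nearest-neighbor retrieval over a key-value datastore followed by linear interpolation with the PLM's output distribution. Identifiers such as `knn_relation_probs`, `lam`, `k`, and `temperature` are illustrative assumptions, not names from the released code; the retrieval is brute-force for clarity.

```python
import numpy as np

def knn_relation_probs(query, datastore_keys, datastore_labels,
                       num_relations, k=16, temperature=1.0):
    """Non-parametric distribution over relations from the k nearest neighbors.

    query            : (d,) prompt-based representation of the test instance
    datastore_keys   : (N, d) memorized instance representations (keys)
    datastore_labels : (N,) relation label index for each key (values)
    """
    # L2 distances between the query representation and all stored keys.
    dists = np.linalg.norm(datastore_keys - query, axis=1)
    nn = np.argsort(dists)[:k]
    # Softmax over negative distances gives each neighbor a weight.
    w = np.exp(-dists[nn] / temperature)
    w /= w.sum()
    # Aggregate neighbor weights by their stored relation labels.
    p_knn = np.zeros(num_relations)
    for weight, label in zip(w, datastore_labels[nn]):
        p_knn[label] += weight
    return p_knn

def interpolated_relation_probs(p_plm, p_knn, lam=0.5):
    """Final prediction: linear interpolation of the PLM's base output
    with the non-parametric kNN distribution (lam is a tunable mixing weight)."""
    return lam * p_knn + (1.0 - lam) * p_plm
```

In this sketch, a larger `lam` leans on the retrieved neighbors, which is what helps on long-tailed or hard patterns that the parameters alone could not memorize; in practice an approximate-nearest-neighbor index would replace the brute-force search for large datastores.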