In this paper, we reformulate the relation extraction task as masked language modeling and propose a novel adaptive prompt-based finetuning approach. We propose an adaptive label-word selection mechanism that scatters each relation label into a variable number of label tokens to handle the complex multi-label space. We further introduce an auxiliary entity discrimination objective to encourage the model to focus on context representation learning. Extensive experiments on benchmark datasets demonstrate that our approach achieves better performance in both few-shot and fully supervised settings.