The typical approach to relation extraction is to fine-tune a large pre-trained language model on a task-specific dataset and then select the label with the highest probability in the output distribution as the final prediction. However, the Top-k prediction set for a given sample is commonly overlooked. In this paper, we first show that the Top-k prediction set of a given sample contains useful information for predicting the correct label. To effectively utilize the Top-k prediction set, we propose the Label Graph Network with Top-k Prediction Set, termed KLG. Specifically, for a given sample, we build a label graph to review the candidate labels in the Top-k prediction set and learn the connections between them. We also design a dynamic $k$-selection mechanism to learn more powerful and discriminative relation representations. Our experiments show that KLG achieves the best performance on three relation extraction datasets. Moreover, we observe that KLG is more effective at handling long-tailed classes.
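To make the core idea concrete, the sketch below shows one way a sample's Top-k candidate labels could be re-scored with a small graph layer. This is a minimal illustration only: the module and function names, the fixed $k$, the fully connected adjacency weighted by the original confidences, and the single message-passing round are all assumptions for exposition, not the exact KLG architecture or its dynamic $k$-selection mechanism.

```python
# Illustrative sketch: re-score a sample's Top-k candidate labels with a
# GCN-style update over a label graph. All names and design choices here
# (fully connected adjacency, confidence-based edge weights, fixed k) are
# assumptions for illustration, not the paper's exact method.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TopKLabelGraphLayer(nn.Module):
    """Re-scores the Top-k candidate labels of each sample via message passing."""

    def __init__(self, num_labels: int, hidden_dim: int = 128, k: int = 5):
        super().__init__()
        self.k = k
        self.label_emb = nn.Embedding(num_labels, hidden_dim)  # learnable label nodes
        self.msg = nn.Linear(hidden_dim, hidden_dim)           # message transform
        self.score = nn.Linear(hidden_dim, 1)                  # per-node re-scoring head

    def forward(self, logits: torch.Tensor):
        # logits: (batch, num_labels) from a fine-tuned relation-extraction model
        topk_vals, topk_idx = logits.topk(self.k, dim=-1)      # candidate label set
        nodes = self.label_emb(topk_idx)                       # (batch, k, hidden)

        # Fully connected label graph over the k candidates; edge weights are the
        # softmax of the original Top-k confidences (an illustrative choice).
        weights = topk_vals.softmax(dim=-1)                    # (batch, k)
        adj = weights.unsqueeze(1).expand(-1, self.k, -1)      # (batch, k, k)

        # One round of message passing, then a residual update of the node states.
        messages = torch.bmm(adj, self.msg(nodes))             # aggregate neighbours
        nodes = F.relu(nodes + messages)

        refined = self.score(nodes).squeeze(-1)                # (batch, k) refined scores
        return refined, topk_idx


if __name__ == "__main__":
    layer = TopKLabelGraphLayer(num_labels=40, k=5)
    fake_logits = torch.randn(2, 40)                           # stand-in for model output
    scores, candidates = layer(fake_logits)
    prediction = candidates.gather(-1, scores.argmax(-1, keepdim=True))
    print(prediction.shape)                                    # (2, 1): one refined label per sample
```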