Real-world knowledge graphs are often characterized by low-frequency relations - a challenge that has prompted an increasing interest in few-shot link prediction methods. These methods perform link prediction for a set of new relations, unseen during training, given only a few example facts of each relation at test time. In this work, we perform a systematic study on a spectrum of models derived by generalizing the current state of the art for few-shot link prediction, with the goal of probing the limits of learning in this few-shot setting. We find that a simple zero-shot baseline - which ignores any relation-specific information - achieves surprisingly strong performance. Moreover, experiments on carefully crafted synthetic datasets show that having only a few examples of a relation fundamentally limits models from using fine-grained structural information and only allows for exploiting the coarse-grained positional information of entities. Together, our findings challenge the implicit assumptions and inductive biases of prior work and highlight new directions for research in this area.