An emerging trend in representation learning over knowledge graphs (KGs) moves beyond transductive link prediction tasks over a fixed set of known entities in favor of inductive tasks that involve training on one graph and performing inference over a new graph with unseen entities. In inductive setups, node features are often unavailable, and training shallow entity embedding matrices is meaningless since they cannot be used at inference time with unseen entities. Despite the growing interest, there are few benchmarks for evaluating inductive representation learning methods. In this work, we introduce ILPC 2022, a novel open challenge on KG inductive link prediction. To this end, we constructed two new datasets based on Wikidata, with training and inference graphs of varying sizes that are much larger than those in existing inductive benchmarks. We also provide two strong baselines leveraging recently proposed inductive methods. We hope this challenge helps to streamline community efforts in the inductive graph representation learning area. ILPC 2022 follows best practices on evaluation fairness and reproducibility, and is available at https://github.com/pykeen/ilpc2022.