Representation learning models for Knowledge Graphs (KGs) have proven effective in encoding structural information and performing reasoning over KGs. In this paper, we propose a novel pre-training-then-fine-tuning framework for knowledge graph representation learning, in which a KG model is first pre-trained on a triple classification task and then discriminatively fine-tuned on specific downstream tasks such as entity type prediction and entity alignment. Drawing on the general idea of learning deep contextualized word representations in typical pre-trained language models, we propose SCoP, which learns pre-trained KG representations by encoding the structural and contextual triples surrounding the target triple. Experimental results demonstrate that fine-tuning SCoP not only outperforms baselines on a portfolio of downstream tasks but also avoids tedious task-specific model design and parameter training.
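To make the pre-train-then-fine-tune workflow concrete, the following is a minimal sketch (not the authors' implementation) of pre-training a toy KG encoder on triple classification and then fine-tuning it for entity type prediction with a new task head. All names (`KGEncoder`, `pretrain_triple_classification`, `finetune_entity_typing`), dimensions, and data are illustrative assumptions; SCoP additionally encodes structural and contextual triples, which is omitted here for brevity.

```python
# A rough sketch of pre-training on triple classification followed by
# fine-tuning on entity type prediction. Illustrative only, not SCoP itself.
import torch
import torch.nn as nn

class KGEncoder(nn.Module):
    """Encodes a (head, relation, tail) triple into a single vector.

    SCoP also encodes the structural and contextual triples around the
    target triple; this toy encoder embeds only the target triple.
    """
    def __init__(self, n_entities, n_relations, dim=128):
        super().__init__()
        self.ent = nn.Embedding(n_entities, dim)
        self.rel = nn.Embedding(n_relations, dim)
        self.mlp = nn.Sequential(nn.Linear(3 * dim, dim), nn.ReLU())

    def forward(self, h, r, t):
        x = torch.cat([self.ent(h), self.rel(r), self.ent(t)], dim=-1)
        return self.mlp(x)  # [batch, dim]

def pretrain_triple_classification(encoder, triples, labels, epochs=5):
    """Pre-training: classify triples as plausible vs. corrupted (binary)."""
    head = nn.Linear(128, 1)
    opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        opt.zero_grad()
        logits = head(encoder(*triples)).squeeze(-1)
        loss_fn(logits, labels).backward()
        opt.step()
    return encoder

def finetune_entity_typing(encoder, triples, type_labels, n_types, epochs=5):
    """Fine-tuning: reuse the pre-trained encoder with a new task-specific head."""
    head = nn.Linear(128, n_types)
    opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(head(encoder(*triples)), type_labels).backward()
        opt.step()
    return encoder, head

if __name__ == "__main__":
    torch.manual_seed(0)
    enc = KGEncoder(n_entities=100, n_relations=10)
    # Toy pre-training data: random triples with binary plausibility labels.
    h = torch.randint(0, 100, (64,))
    r = torch.randint(0, 10, (64,))
    t = torch.randint(0, 100, (64,))
    y = torch.randint(0, 2, (64,)).float()
    enc = pretrain_triple_classification(enc, (h, r, t), y)
    # Toy fine-tuning data: the same triples labelled with 5 entity types.
    types = torch.randint(0, 5, (64,))
    finetune_entity_typing(enc, (h, r, t), types, n_types=5)
```

The key design point illustrated here is that the encoder's parameters are shared across stages: only the lightweight task head is replaced per downstream task, which is what allows the framework to avoid task-specific model design.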