While knowledge graphs contain rich semantic knowledge of various entities and the relational information among them, temporal knowledge graphs (TKGs) further indicate the interactions of the entities over time. To study how to better model TKGs, automatic temporal knowledge graph completion (TKGC) has attracted great interest. Recent TKGC methods integrate advanced deep learning techniques, e.g., attention mechanisms and Transformers, to boost model performance. However, we find that rather than adopting various kinds of complex modules, it is more beneficial to better utilize the whole amount of temporal information along the time axis. In this paper, we propose a simple but powerful graph encoder for TKGC, called TARGCN. TARGCN is parameter-efficient, and it extensively exploits the information from the entire temporal context. We perform experiments on three benchmark datasets. Our model achieves a more than 42% relative improvement on the GDELT dataset compared with the state-of-the-art model. Meanwhile, it outperforms the strongest baseline on the ICEWS05-15 dataset with around 18.5% fewer parameters.