Knowledge graphs contain rich knowledge about various entities and the relational information among them, while temporal knowledge graphs (TKGs) describe and model the interactions of these entities over time. In this context, automatic temporal knowledge graph completion (TKGC) has gained great interest. Recent TKGC methods integrate advanced deep learning techniques, e.g., Transformers, and achieve superior model performance. However, this also introduces a large number of additional parameters, which places a heavier burden on parameter optimization. In this paper, we propose a simple but powerful graph encoder for TKGC, called TARGCN. TARGCN is parameter-efficient, and it extensively explores every entity's temporal context to learn contextualized representations. We find that, instead of adopting various complex modules, it is more beneficial to efficiently capture the temporal contexts of entities. We evaluate TARGCN on three benchmark datasets. Our model achieves a relative improvement of more than 46% on the GDELT dataset compared with state-of-the-art TKGC models. Meanwhile, it outperforms the strongest baseline on the ICEWS05-15 dataset with around 18% fewer parameters.