Graph embedding (GE) methods embed the nodes (and/or edges) of a graph into a low-dimensional semantic space and have shown their effectiveness in modeling multi-relational data. However, existing GE models are impractical in real-world applications because they overlook the streaming nature of incoming data. To address this issue, we study the problem of continual graph representation learning, which aims to continually train a GE model on new data so that it learns incessantly emerging multi-relational data while avoiding catastrophic forgetting of previously learned knowledge. Moreover, we propose a disentangle-based continual graph representation learning (DiCGRL) framework inspired by the human ability to learn procedural knowledge. Experimental results show that DiCGRL effectively alleviates the catastrophic forgetting problem and outperforms state-of-the-art continual learning models.
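The streaming setting the abstract describes can be made concrete with a toy experiment. The sketch below is not the paper's DiCGRL method; it is a minimal illustration, under assumed names and a TransE-style translational scorer, of why naive continual training of a graph embedding model is problematic: fine-tuning on newly arriving triples shifts shared entity embeddings and degrades the scores of previously learned triples (catastrophic forgetting).

```python
import numpy as np

# Toy TransE-style scorer: score(h, r, t) = -||e_h + e_r - e_t||.
# All names and the setup are illustrative assumptions, NOT the
# DiCGRL framework itself.
rng = np.random.default_rng(0)
n_ent, n_rel, dim = 6, 2, 8
E = rng.normal(scale=0.1, size=(n_ent, dim))   # entity embeddings
R = rng.normal(scale=0.1, size=(n_rel, dim))   # relation embeddings

def score(h, r, t):
    # Higher (closer to 0) is better under the translational assumption.
    return -np.linalg.norm(E[h] + R[r] - E[t])

def train(triples, steps=200, lr=0.05):
    # Plain gradient descent on 0.5 * ||e_h + e_r - e_t||^2 per triple.
    for _ in range(steps):
        for h, r, t in triples:
            g = E[h] + R[r] - E[t]   # gradient w.r.t. e_h and e_r
            E[h] -= lr * g
            R[r] -= lr * g
            E[t] += lr * g           # gradient w.r.t. e_t has opposite sign

old_data = [(0, 0, 1), (2, 0, 3)]   # first "snapshot" of the graph
new_data = [(1, 1, 4), (3, 1, 5)]   # newly arriving triples (share entities 1, 3)

train(old_data)
before = np.mean([score(*x) for x in old_data])
train(new_data)                      # naive continual update, no forgetting control
after = np.mean([score(*x) for x in old_data])
print(f"mean old-triple score before new data: {before:.4f}, after: {after:.4f}")
```

Because the new triples reuse entities 1 and 3, updating on them moves embeddings that the old triples depend on, so the old-triple scores drop after the continual update. Continual learning methods such as the one proposed here aim to keep that drop small.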