Despite significant advances in graph representation learning, little attention has been paid to the more practical continual learning scenario in which new categories of nodes (e.g., new research areas in citation networks, or new types of products in co-purchasing networks) and their associated edges are continuously emerging, causing catastrophic forgetting of previous categories. Existing methods either ignore the rich topological information or sacrifice plasticity for stability. To this end, we present Hierarchical Prototype Networks (HPNs), which extract different levels of abstract knowledge in the form of prototypes to represent continuously expanding graphs. Specifically, we first leverage a set of Atomic Feature Extractors (AFEs) to encode both the elemental attribute information and the topological structure of the target node. Next, we develop HPNs to adaptively select the relevant AFEs and represent each node with three levels of prototypes. In this way, whenever a new category of nodes arrives, only the relevant AFEs and prototypes at each level are activated and refined, while the others are left untouched to maintain performance on existing nodes. Theoretically, we first show that the memory consumption of HPNs is bounded regardless of how many tasks are encountered. We then prove that, under mild constraints, learning new tasks does not alter the prototypes matched to previous data, thereby eliminating the forgetting problem. The theoretical results are supported by experiments on five datasets, showing that HPNs not only outperform state-of-the-art baselines but also consume less memory.
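To make the hierarchical matching idea concrete, the following is a minimal, hypothetical sketch of how AFE embeddings might be matched to prototypes level by level. All names, shapes, and the distance-based matching rule are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

# Hypothetical sketch of hierarchical prototype matching, loosely following
# the HPN description: atomic feature extractors (AFEs) embed a node, and the
# nearest prototype at each of three levels represents it. Shapes, names, and
# the nearest-neighbor rule are illustrative assumptions.

rng = np.random.default_rng(0)

DIM = 8
N_PROTOS = 4   # prototypes per level (assumed)
LEVELS = 3     # three levels of prototypes, as in the abstract

# One prototype table per level (assumption: same dimensionality everywhere).
prototypes = [rng.normal(size=(N_PROTOS, DIM)) for _ in range(LEVELS)]

def afe_embed(node_features, neighbor_features):
    """Toy AFE: mix a node's own attributes with the mean of its neighbors'
    attributes, so the embedding reflects both attribute and topological
    information (a stand-in for the paper's AFEs)."""
    return 0.5 * node_features + 0.5 * neighbor_features.mean(axis=0)

def match_hierarchy(embedding, prototypes):
    """Return the index of the nearest prototype at each level.
    Each level re-matches the prototype chosen at the level below."""
    rep, chosen = embedding, []
    for table in prototypes:
        idx = int(np.argmin(np.linalg.norm(table - rep, axis=1)))
        chosen.append(idx)
        rep = table[idx]  # the matched prototype feeds the next level
    return chosen

node = rng.normal(size=DIM)
neighbors = rng.normal(size=(5, DIM))
print(match_hierarchy(afe_embed(node, neighbors), prototypes))
```

Under this toy rule, learning a new category would add or refine only the prototypes it matches, leaving the tables' other rows (and hence the representations of previously seen nodes) unchanged, which is the intuition behind the paper's no-forgetting guarantee.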