Catastrophic forgetting refers to the tendency of a neural network to "forget" previously learned knowledge upon learning new tasks. Prior methods have focused on overcoming this problem in convolutional neural networks (CNNs), where the input samples, such as images, lie on a grid domain, but have largely overlooked graph neural networks (GNNs) that handle non-grid data. In this paper, we propose a novel scheme dedicated to overcoming the catastrophic forgetting problem and hence strengthening continual learning in GNNs. At the heart of our approach is a generic module, termed topology-aware weight preserving~(TWP), applicable to arbitrary forms of GNNs in a plug-and-play fashion. Unlike the mainstream of CNN-based continual learning methods, which rely solely on slowing down the updates of parameters important to the downstream task, TWP explicitly explores the local structures of the input graph and attempts to stabilize the parameters playing pivotal roles in the topological aggregation. We evaluate TWP on different GNN backbones over several datasets, and demonstrate that it yields performance superior to the state of the art. Code is publicly available at \url{https://github.com/hhliu79/TWP}.
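For readers who want a concrete picture of the kind of regularizer the abstract alludes to, the sketch below shows one way a topology-aware weight-preserving penalty can be written in PyTorch. It is a minimal illustration under assumed names (compute_importance, twp_penalty, topo_score, beta, lam), not the authors' released implementation; the actual code is in the repository linked above.

\begin{verbatim}
import torch
import torch.nn as nn

# Sketch of a TWP-style regularizer (hypothetical names; not the authors'
# exact implementation). After finishing a task, each parameter receives an
# importance score combining (i) the gradient of the task loss and (ii) the
# gradient of a scalar "topo_score" summarizing the topological aggregation
# (e.g. the sum of attention coefficients in a GAT layer).

def compute_importance(model: nn.Module, task_loss: torch.Tensor,
                       topo_score: torch.Tensor, beta: float = 0.1):
    params = [p for p in model.parameters() if p.requires_grad]
    g_task = torch.autograd.grad(task_loss, params,
                                 retain_graph=True, allow_unused=True)
    g_topo = torch.autograd.grad(topo_score, params, allow_unused=True)
    importance = []
    for p, gt, gs in zip(params, g_task, g_topo):
        gt = gt if gt is not None else torch.zeros_like(p)
        gs = gs if gs is not None else torch.zeros_like(p)
        # Parameters with large gradients w.r.t. either term are "pivotal".
        importance.append(gt.abs() + beta * gs.abs())
    return importance

def twp_penalty(model: nn.Module, old_params, importance, lam: float = 1.0):
    # Quadratic penalty that discourages important parameters from drifting
    # away from their values after the previous task; added to the loss of
    # the new task during training.
    params = [p for p in model.parameters() if p.requires_grad]
    return lam * sum((imp * (p - old) ** 2).sum()
                     for p, old, imp in zip(params, old_params, importance))
\end{verbatim}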