In this work, we study the phenomenon of catastrophic forgetting in the graph representation learning scenario. The primary objective of the analysis is to understand whether classical continual learning techniques for flat and sequential data have a tangible impact on performance when applied to graph data. To do so, we experiment with a structure-agnostic model and a deep graph network in a robust and controlled environment on three different datasets. The benchmark is complemented by an investigation of the effect of structure-preserving regularization techniques on catastrophic forgetting. We find that replay is the most effective strategy among those considered, and it also benefits the most from the use of regularization. Our findings suggest interesting future research directions at the intersection of the continual learning and graph representation learning fields. Finally, we provide researchers with a flexible software framework to reproduce our results and carry out further experiments.