Few-shot class-incremental learning (FSCIL) aims to design machine learning algorithms that can continually learn new concepts from a few data points without forgetting knowledge of old classes. The difficulty lies in that limited data from new classes not only lead to significant overfitting but also exacerbate the notorious catastrophic forgetting problem. However, existing FSCIL methods ignore the semantic relationships between the sample level and the class level. Leveraging the ability of graph neural networks (GNNs) to mine rich information among few samples, in this paper we design a two-level graph network for FSCIL named Sample-level and Class-level Graph Neural Network (SCGN). Specifically, a pseudo incremental learning paradigm is designed in SCGN, which synthesizes virtual few-shot tasks as new tasks to optimize the SCGN model parameters in advance. The sample-level graph network uses the relationships among a few samples to aggregate similar samples and obtain refined class-level features. The class-level graph network aims to mitigate the semantic conflict between the prototype features of new classes and those of old classes. Together, the two-level graph networks guarantee that the latent semantics of each few-shot class are effectively represented in FSCIL. Experiments on three popular benchmark datasets show that our method significantly outperforms the baselines and sets new state-of-the-art results with remarkable advantages.
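Since the abstract describes the architecture only at a high level, the following is a minimal illustrative sketch (not the authors' implementation) of how similarity-based message passing could realize the two levels. The module names GraphLayer and SCGNSketch, the cosine-similarity adjacency, and the mean-pooled prototype construction are all assumptions introduced here for illustration.

```python
# Hypothetical sketch of the two-level graph idea, assuming a PyTorch backbone;
# all names and design details beyond the abstract are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GraphLayer(nn.Module):
    """One round of similarity-weighted message passing over node features."""

    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_nodes, dim). Build a dense affinity graph from cosine
        # similarity, then aggregate neighbors and update node features.
        normed = F.normalize(x, dim=-1)
        adj = F.softmax(normed @ normed.t(), dim=-1)  # soft adjacency weights
        return F.relu(self.proj(adj @ x)) + x          # residual update


class SCGNSketch(nn.Module):
    """Sample-level graph refines per-class features; class-level graph
    then calibrates new-class prototypes against old-class prototypes."""

    def __init__(self, dim: int):
        super().__init__()
        self.sample_graph = GraphLayer(dim)  # over support samples of a task
        self.class_graph = GraphLayer(dim)   # over old + new class prototypes

    def forward(self, support: torch.Tensor, labels: torch.Tensor,
                old_prototypes: torch.Tensor) -> torch.Tensor:
        # support: (n_way * k_shot, dim) embeddings from a feature backbone.
        refined = self.sample_graph(support)
        # Mean-pool the refined samples per class into new-class prototypes.
        new_protos = torch.stack(
            [refined[labels == c].mean(0) for c in labels.unique()])
        # The class-level graph jointly adjusts old and new prototypes,
        # reducing semantic conflict between them.
        all_protos = torch.cat([old_prototypes, new_protos], dim=0)
        return self.class_graph(all_protos)
```

Under this reading, the pseudo incremental paradigm would repeatedly sample such virtual few-shot tasks from the base classes and run this forward pass during pre-optimization, so that the graph parameters are already adapted to prototype calibration before real incremental sessions arrive.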