The capacity to learn incrementally from an online stream of data is an envied trait of human learners, as deep neural networks typically suffer from catastrophic forgetting and the stability-plasticity dilemma. Several works have previously explored incremental few-shot learning, a task made more challenging by its data constraints, mostly in the classification setting and with only mild success. In this work, we study the underrepresented task of generative incremental few-shot learning. To effectively handle the inherent challenges of incremental learning and few-shot learning, we propose a novel framework named ConPro that leverages the two-player nature of GANs. Specifically, we design a conservative generator that preserves past knowledge in a parameter- and compute-efficient manner, and a progressive discriminator that learns to reason about semantic distances between past- and present-task samples, mitigating overfitting on few data points and pursuing good forward transfer. We present experiments to validate the effectiveness of ConPro.
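Since the abstract describes ConPro's two components only at a high level, the following minimal PyTorch sketch illustrates one plausible reading of the design, not the authors' implementation: a generator that freezes previously learned parameters and adds a small per-task adapter (parameter- and compute-efficient preservation), and a discriminator whose embeddings allow comparing semantic distances between past- and present-task samples. All class names (ConservativeGenerator, ProgressiveDiscriminator), the add_task/semantic_distance helpers, and every dimension are hypothetical assumptions.

```python
# Illustrative sketch only; module names and sizes are assumptions, not ConPro's code.
import torch
import torch.nn as nn

class ConservativeGenerator(nn.Module):
    def __init__(self, z_dim=64, h_dim=128, out_dim=784):
        super().__init__()
        # Shared backbone; frozen after the first task to preserve past knowledge.
        self.backbone = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU())
        self.head = nn.Linear(h_dim, out_dim)
        self.adapters = nn.ModuleList()  # one lightweight adapter per incremental task

    def add_task(self):
        if self.adapters:  # after the first task, freeze all previously learned weights
            for p in self.parameters():
                p.requires_grad_(False)
        adapter = nn.Linear(128, 128)  # small task-specific parameter budget
        self.adapters.append(adapter)
        return adapter

    def forward(self, z, task_id):
        h = self.backbone(z)
        h = h + self.adapters[task_id](h)  # residual task-specific modulation
        return torch.tanh(self.head(h))

class ProgressiveDiscriminator(nn.Module):
    def __init__(self, in_dim=784, emb_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, emb_dim), nn.ReLU(),
                                     nn.Linear(emb_dim, emb_dim))
        self.real_fake = nn.Linear(emb_dim, 1)

    def forward(self, x):
        e = self.encoder(x)
        return self.real_fake(e), e  # adversarial score plus embedding

def semantic_distance(d, x_past, x_present):
    # Distance between mean task embeddings; a small distance suggests
    # transferable knowledge that can regularize few-shot adaptation.
    _, e_past = d(x_past)
    _, e_present = d(x_present)
    return (e_past.mean(0) - e_present.mean(0)).norm()

if __name__ == "__main__":
    g, d = ConservativeGenerator(), ProgressiveDiscriminator()
    g.add_task()                   # register task 0
    fake = g(torch.randn(8, 64), task_id=0)
    score, emb = d(fake)
    print(score.shape, emb.shape)  # torch.Size([8, 1]) torch.Size([8, 128])
```

Under this reading, only the adapter trains on each new task, so past generations are preserved by construction, while the discriminator's embedding space gives a handle for measuring forward transfer between tasks.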