Classical deep neural networks are limited in their ability to learn from an emerging stream of training data: when trained sequentially on new or evolving tasks, their performance degrades sharply, making them unsuitable for many real-world use cases. Existing methods tackle this by either storing old data samples or updating only a subset of the DNN's parameters, which, however, either demands a large memory budget or restricts the model's flexibility to learn the distribution of newly added classes. In this paper, we shed light on an on-call transfer set that provides past experiences whenever a new class arises in the data stream. In particular, we propose Zero-Shot Incremental Learning (ZS-IL), which not only replays the experiences the model has already learned but also does so in a zero-shot manner. Towards this end, we introduce a memory recovery paradigm in which we query the network to synthesize past exemplars whenever a new task (class) emerges. Thus, our method needs no fixed-size memory; instead, it invokes the proposed memory recovery paradigm to produce past exemplars, collected in a so-called transfer set, in order to mitigate catastrophic forgetting of the former classes. Moreover, in contrast to recently proposed methods, the suggested paradigm requires no parallel architecture, since it relies only on the learner network itself. Compared to state-of-the-art techniques that do not buffer past data samples, ZS-IL demonstrates significantly better performance on well-known datasets (CIFAR-10, Tiny-ImageNet) in both Task-IL and Class-IL settings.
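The abstract does not spell out how the memory recovery paradigm queries the learner network. As a rough illustration only, the following minimal sketch assumes a model-inversion style procedure: random inputs are optimized so that the frozen learner classifies them as previously seen classes, and the resulting pseudo-exemplars form the transfer set. All function names, hyper-parameters, and the total-variation prior are hypothetical and not taken from the paper.

# Minimal sketch (assumption, not the paper's exact procedure): recover a
# "transfer set" of past exemplars by inverting the frozen learner network,
# i.e. optimizing random inputs so the network assigns them to old classes.
import torch
import torch.nn.functional as F

def total_variation(x):
    # Simple smoothness prior over the synthesized images (illustrative).
    return ((x[..., 1:, :] - x[..., :-1, :]).abs().mean()
            + (x[..., :, 1:] - x[..., :, :-1]).abs().mean())

def recover_transfer_set(learner, old_classes, n_per_class=20,
                         image_shape=(3, 32, 32), steps=200, lr=0.1):
    """Synthesize pseudo-exemplars for previously learned classes.

    `learner` is the current (frozen) network; `old_classes` are the class
    indices seen so far. Hyper-parameters are illustrative placeholders.
    """
    learner.eval()
    transfer_images, transfer_labels = [], []
    for c in old_classes:
        # Start from random noise and optimize it towards class c.
        x = torch.randn(n_per_class, *image_shape, requires_grad=True)
        target = torch.full((n_per_class,), c, dtype=torch.long)
        opt = torch.optim.Adam([x], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            logits = learner(x)
            # Cross-entropy drives the inputs to be classified as class c;
            # the total-variation term keeps the synthesized images smooth.
            loss = F.cross_entropy(logits, target) + 1e-4 * total_variation(x)
            loss.backward()
            opt.step()
        transfer_images.append(x.detach())
        transfer_labels.append(target)
    return torch.cat(transfer_images), torch.cat(transfer_labels)

Such a transfer set could then be mixed with the new class's data when the learner is updated, which is one plausible way the described replay-without-a-buffer behavior might be realized.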