Continual learning aims to learn new tasks without forgetting previously learned ones. In practice, most existing artificial neural network (ANN) models fail at this, whereas humans accomplish it by retaining what they have learned throughout their lives. Although simply storing all past data can alleviate the problem, doing so requires large memory and is often infeasible in real-world applications where access to past data is limited. We hypothesize that a model that learns to solve each task continually has some task-specific properties and some task-invariant characteristics. To address these issues, we propose a hybrid continual learning model, better suited to real-world scenarios, that consists of a task-invariant shared variational autoencoder and T task-specific variational autoencoders. Our model combines generative replay and architectural growth to prevent catastrophic forgetting. We show that our hybrid model effectively avoids forgetting and achieves state-of-the-art results on visual continual learning benchmarks such as the MNIST, Permuted MNIST (QMNIST), CIFAR100, and miniImageNet datasets. We also discuss results on several additional datasets: SVHN, Fashion-MNIST, EMNIST, and CIFAR10.
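The overall training loop described above (a shared task-invariant model, one task-specific model per task, and generative replay mixed into each new task's data) can be sketched structurally as follows. This is a minimal illustration under assumptions of my own, with a placeholder `VAE` class standing in for real variational autoencoders; it is not the authors' implementation.

```python
class VAE:
    """Placeholder VAE: stores data on fit, replays stored items on sample.
    A real VAE would instead decode random draws from its latent space."""
    def __init__(self):
        self.data = []

    def fit(self, batch):
        self.data.extend(batch)

    def sample(self, n):
        return self.data[:n]


class HybridContinualLearner:
    """Task-invariant shared VAE plus one task-specific VAE per task."""
    def __init__(self):
        self.shared = VAE()   # task-invariant shared representation
        self.task_vaes = []   # architectural growth: one new VAE per task

    def learn_task(self, batch):
        # Generative replay: mix samples generated by previous task-specific
        # VAEs into the current batch so the shared model does not forget.
        replay = [x for vae in self.task_vaes for x in vae.sample(2)]
        self.shared.fit(batch + replay)
        # Architectural growth: add a fresh task-specific VAE for this task.
        new_vae = VAE()
        new_vae.fit(batch)
        self.task_vaes.append(new_vae)


learner = HybridContinualLearner()
learner.learn_task(["task1_a", "task1_b"])  # first task: no replay yet
learner.learn_task(["task2_a"])             # second task: replays task 1
print(len(learner.task_vaes))               # 2
```

The key design point the sketch captures is that the shared model is only ever trained on a mixture of new data and replayed samples, while per-task capacity grows so task-specific knowledge is never overwritten.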