Neural networks are known to suffer from catastrophic forgetting when trained on sequential datasets. While there have been numerous attempts to solve this problem in large-scale supervised classification, little has been done to overcome catastrophic forgetting in few-shot classification problems. We demonstrate that the popular gradient-based model-agnostic meta-learning algorithm (MAML) indeed suffers from catastrophic forgetting and introduce a Bayesian online meta-learning framework that tackles this problem. Our framework utilises Bayesian online learning and meta-learning along with Laplace approximation and variational inference to overcome catastrophic forgetting in few-shot classification problems. The experimental evaluations demonstrate that our framework can effectively achieve this goal in comparison with various baselines. As an additional utility, we also demonstrate empirically that our framework is capable of meta-learning on sequentially arriving few-shot tasks from a stationary task distribution.
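To make the general idea concrete, the sketch below illustrates Bayesian online learning with a diagonal Laplace approximation: after each task, the posterior over the weights is approximated by a Gaussian centred at the learned parameters, and its precision (here a diagonal Fisher estimate) becomes a quadratic penalty that anchors training on the next task. This is a minimal illustration of the underlying principle, not the paper's exact algorithm; the helper names `fit_task`, `diagonal_fisher`, and `laplace_penalty`, and the hyperparameter values, are assumptions for exposition.

```python
# Minimal sketch (illustrative, not the authors' exact method) of Bayesian
# online learning with a diagonal Laplace approximation of the posterior.
import torch
import torch.nn.functional as F


def diagonal_fisher(model, loader):
    """Estimate a diagonal Fisher information matrix on the current task's data."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    for x, y in loader:
        model.zero_grad()
        F.cross_entropy(model(x), y).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2
    return {n: f / max(len(loader), 1) for n, f in fisher.items()}


def laplace_penalty(model, prior_mean, prior_precision):
    """Quadratic penalty from the Gaussian (Laplace) posterior of earlier tasks."""
    penalty = 0.0
    for n, p in model.named_parameters():
        penalty = penalty + (prior_precision[n] * (p - prior_mean[n]) ** 2).sum()
    return penalty


def fit_task(model, loader, prior_mean=None, prior_precision=None,
             lam=100.0, lr=1e-3, epochs=5):
    """Train on the current task while staying close to the previous posterior."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss = F.cross_entropy(model(x), y)
            if prior_mean is not None:
                # The old posterior acts as the prior for the new task.
                loss = loss + 0.5 * lam * laplace_penalty(model, prior_mean, prior_precision)
            loss.backward()
            opt.step()
    # The updated mean and precision become the prior for the next task.
    mean = {n: p.detach().clone() for n, p in model.named_parameters()}
    precision = diagonal_fisher(model, loader)
    return mean, precision
```

In a sequential setting, `fit_task` would be called once per incoming task, feeding the returned `mean` and `precision` into the next call, so that earlier tasks constrain later updates through the quadratic penalty rather than being revisited directly.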