We present an architecture that is effective for continual learning in an especially demanding setting, where task boundaries do not exist or are unknown, and where classes have to be learned online (with each example presented only once). To obtain good performance under these constraints, while mitigating catastrophic forgetting, we exploit recent advances in contrastive, self-supervised learning, allowing us to use a pre-trained, general-purpose image encoder whose weights can be frozen, which precludes forgetting. The pre-trained encoder also greatly simplifies the downstream task of classification, which we solve with an ensemble of very simple classifiers. Collectively, the ensemble exhibits much better performance than any individual classifier, an effect which is amplified through specialisation and competitive selection. We assess the performance of the encoders-and-ensembles architecture on standard continual learning benchmarks, where it outperforms the prior state of the art by a large margin on the hardest problems, as well as in less familiar settings where the data distribution changes gradually or the classes are presented one at a time.
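To make the architecture concrete, the following is a minimal, illustrative sketch of the encoder-and-ensemble idea described above: a frozen feature extractor feeding an ensemble of simple per-member classifiers, with competitive selection so that only the members whose keys best match the input vote and are updated online. All names, sizes, and the random-projection stand-in for the pre-trained encoder are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

# Minimal sketch of an encoder-and-ensemble classifier (illustrative only).
# The frozen encoder is stood in for by a fixed random projection; in practice
# it would be a pre-trained, self-supervised image encoder with frozen weights.

rng = np.random.default_rng(0)

D_IN, D_FEAT, N_CLASSES = 784, 128, 10   # hypothetical input/feature/class sizes
N_MEMBERS, TOP_K, LR = 256, 16, 0.1      # ensemble size, active subset, step size

W_enc = rng.normal(size=(D_IN, D_FEAT)) / np.sqrt(D_IN)   # frozen "encoder"
keys = rng.normal(size=(N_MEMBERS, D_FEAT))                # fixed member keys
keys /= np.linalg.norm(keys, axis=1, keepdims=True)
W_cls = np.zeros((N_MEMBERS, D_FEAT, N_CLASSES))           # per-member linear heads


def encode(x):
    """Frozen feature extractor: its weights are never updated, so it cannot forget."""
    z = x @ W_enc
    return z / (np.linalg.norm(z) + 1e-8)


def predict(x):
    """Competitive selection: only the top-k members closest to the input vote."""
    z = encode(x)
    sims = keys @ z
    active = np.argsort(sims)[-TOP_K:]
    logits = np.einsum("kdc,d->kc", W_cls[active], z)       # each active member's scores
    weights = np.exp(sims[active])                           # similarity-weighted vote
    return (weights[:, None] * logits).sum(axis=0), z, active


def update(x, y):
    """One online step: each example is seen once, and only the active members
    adapt, which drives specialisation to different regions of feature space."""
    scores, z, active = predict(x)
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    grad = probs.copy()
    grad[y] -= 1.0                                           # softmax cross-entropy gradient
    for m in active:
        W_cls[m] -= LR * np.outer(z, grad)


# Usage: a single pass over a (synthetic) example stream with no task boundaries.
for x, y in [(rng.normal(size=D_IN), rng.integers(N_CLASSES)) for _ in range(100)]:
    update(x, y)
```

Because the encoder is frozen and each ensemble member only ever sees the inputs it wins, forgetting is confined to the small, specialised heads, which is the effect the abstract attributes to specialisation and competitive selection.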