Continual and multi-task learning are common machine learning approaches to learning from multiple tasks. Existing works in the literature often treat multi-task learning as a sensible performance upper bound for various continual learning algorithms. While this assumption has been empirically verified on different continual learning benchmarks, it is not rigorously justified. Moreover, it is conceivable that, when learning from multiple tasks, a small subset of them behaves adversarially and reduces the overall learning performance in the multi-task setting. In contrast, continual learning approaches can avoid the performance drop caused by such adversarial tasks and preserve their performance on the remaining tasks, leading to better performance than a multi-task learner. This paper proposes a novel continual self-supervised learning setting, in which each task corresponds to learning an invariant representation for a specific class of data augmentations. In this setting, we show that continual learning often beats multi-task learning on various benchmark datasets, including MNIST, CIFAR-10, and CIFAR-100.
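To make the proposed setting concrete, the sketch below illustrates one way each "task" can be identified with a class of data augmentations and visited sequentially. This is a minimal illustration under assumptions, not the paper's implementation: the augmentation families (`color`, `geometric`, `blur`), the small encoder, and the SimSiam-style predictor with a stop-gradient loss are all illustrative choices; a multi-task counterpart would instead sample views from all augmentation classes jointly.

```python
# Minimal sketch (not the authors' implementation) of the proposed setting:
# each "task" is one class of data augmentations, and the encoder is trained
# sequentially (continually) to be invariant to that class, using a
# SimSiam-style predictor and stop-gradient to avoid representation collapse.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import transforms

# Hypothetical augmentation "tasks": each entry is one augmentation class.
augmentation_tasks = {
    "color":     transforms.ColorJitter(0.4, 0.4, 0.4, 0.1),
    "geometric": transforms.RandomResizedCrop(32, scale=(0.5, 1.0)),
    "blur":      transforms.GaussianBlur(kernel_size=3),
}

class Encoder(nn.Module):
    """Small CNN encoder producing a representation vector."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, dim),
        )

    def forward(self, x):
        return self.net(x)

def simsiam_loss(p, z):
    # Negative cosine similarity with stop-gradient on the target branch.
    return -F.cosine_similarity(p, z.detach(), dim=-1).mean()

encoder = Encoder()
predictor = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 128))
opt = torch.optim.SGD(
    list(encoder.parameters()) + list(predictor.parameters()), lr=0.05
)

# Continual loop: augmentation tasks are visited one after another, not jointly.
for task_name, augment in augmentation_tasks.items():
    for step in range(10):                  # a few steps per task, for illustration
        x = torch.rand(16, 3, 32, 32)       # stand-in batch (e.g., CIFAR-10 images)
        v1, v2 = augment(x), augment(x)     # two views from the *current* task's augmentation
        z1, z2 = encoder(v1), encoder(v2)
        p1, p2 = predictor(z1), predictor(z2)
        loss = 0.5 * (simsiam_loss(p1, z2) + simsiam_loss(p2, z1))
        opt.zero_grad()
        loss.backward()
        opt.step()
    print(f"finished task '{task_name}', last loss {loss.item():.3f}")
```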