Inspired by the success of self-supervised learning (SSL) in learning visual representations from unlabeled data, a few recent works have studied SSL in the context of continual learning (CL), where multiple tasks are learned sequentially, giving rise to a new paradigm, namely self-supervised continual learning (SSCL). It has been shown that SSCL outperforms supervised continual learning (SCL), as the learned representations are more informative and more robust to catastrophic forgetting. However, if not designed intelligently, the training complexity of SSCL may be prohibitively high due to the inherent training cost of SSL. In this work, we first investigate the task correlations in the SSCL setup and discover an interesting phenomenon: with the SSL-learned backbone model, the intermediate features are highly correlated between tasks. Based on this new finding, we propose a new SSCL method with layer-wise freezing, which progressively freezes a portion of the layers with the highest correlation ratios for each task to improve training computation efficiency and memory efficiency. We perform extensive experiments across multiple datasets, where our proposed method shows superior performance against the SoTA SSCL methods under various SSL frameworks. For example, compared to LUMP, our method achieves 12\%/14\%/12\% GPU training time reduction, 23\%/26\%/24\% memory reduction, 35\%/34\%/33\% backward FLOPs reduction, and 1.31\%/1.98\%/1.21\% forgetting reduction without accuracy degradation on three datasets, respectively.
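To make the core idea concrete, below is a minimal sketch of correlation-based progressive layer freezing. It assumes a linear-CKA-style proxy for the per-layer feature correlation between the previous and current task; the function names, the CKA choice, and the freeze_ratio knob are illustrative assumptions, not the paper's exact correlation-ratio definition or freezing schedule.

```python
import torch


def linear_cka(x: torch.Tensor, y: torch.Tensor) -> float:
    """Linear CKA between two per-layer feature matrices of shape (n, d1) and (n, d2).

    Illustrative proxy for the inter-task correlation of a layer's intermediate
    features (previous-task batch vs. current-task batch of equal size n).
    """
    x = x - x.mean(dim=0, keepdim=True)
    y = y - y.mean(dim=0, keepdim=True)
    num = torch.norm(y.t() @ x, p="fro") ** 2
    den = torch.norm(x.t() @ x, p="fro") * torch.norm(y.t() @ y, p="fro")
    return (num / den).item()


def progressively_freeze(blocks, corr_per_block, freeze_ratio=0.3):
    """Freeze the most task-correlated blocks before training the new task.

    blocks: list of nn.Module blocks of the SSL backbone (e.g., ResNet stages).
    corr_per_block: one correlation score per block (e.g., from linear_cka).
    freeze_ratio: hypothetical knob for the fraction of blocks frozen per task.
    """
    k = int(len(blocks) * freeze_ratio)
    # Indices of the k blocks whose features correlate most strongly across tasks.
    top = sorted(range(len(corr_per_block)),
                 key=lambda i: corr_per_block[i], reverse=True)[:k]
    for i in top:
        for p in blocks[i].parameters():
            # No gradients or optimizer states for frozen blocks,
            # which is where the backward-FLOPs and memory savings come from.
            p.requires_grad_(False)
    return top
```

In this reading, the frozen set can only grow as tasks arrive (hence "progressive"), so later tasks back-propagate through ever fewer layers while the highly correlated early representations are reused as-is.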