Existing work on continual learning (CL) is primarily devoted to developing algorithms for models trained from scratch. Despite their encouraging performance on contrived benchmarks, these algorithms suffer dramatic performance drops in real-world scenarios. This paper therefore advocates the systematic introduction of pre-training to CL: a general recipe for transferring knowledge to downstream tasks, yet one that is largely missing from the CL community. Our investigation reveals the multifaceted complexity of exploiting pre-trained models for CL along three different axes: pre-trained models, CL algorithms, and CL scenarios. Perhaps most intriguingly, improvements in CL algorithms from pre-training are very inconsistent: an underperforming algorithm can become competitive and even state-of-the-art when all algorithms start from a pre-trained model. This indicates that the current paradigm, in which all CL methods are compared under from-scratch training, does not faithfully reflect the true CL objective and desired progress. In addition, we make several other important observations, including that CL algorithms that exert less regularization benefit more from a pre-trained model, and that a stronger pre-trained model such as CLIP does not guarantee a larger improvement. Based on these findings, we introduce a simple yet effective baseline that employs minimal regularization and leverages the more beneficial pre-trained model, coupled with a two-stage training pipeline. We recommend including this strong baseline in the future development of CL algorithms, given its demonstrated state-of-the-art performance.
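To make the two-stage training pipeline concrete, the following is a minimal PyTorch sketch of one plausible instantiation: stage one fits only a new classification head on frozen pre-trained features, and stage two fine-tunes the entire network at a lower learning rate. The helper names (`train_epoch`, `two_stage_task_update`), the ResNet-18 backbone, and all hyperparameters are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
from torchvision import models

def train_epoch(model, loader, optimizer, device="cpu"):
    # One supervised pass over the current task's data.
    criterion = nn.CrossEntropyLoss()
    model.train()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()

def two_stage_task_update(model, task_loader, head_epochs=5, ft_epochs=10):
    # Stage 1 (assumed): freeze the pre-trained backbone, train only the head.
    for p in model.parameters():
        p.requires_grad = False
    for p in model.fc.parameters():
        p.requires_grad = True
    opt = torch.optim.SGD(model.fc.parameters(), lr=1e-2, momentum=0.9)
    for _ in range(head_epochs):
        train_epoch(model, task_loader, opt)

    # Stage 2 (assumed): unfreeze everything, fine-tune at a lower learning rate.
    for p in model.parameters():
        p.requires_grad = True
    opt = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
    for _ in range(ft_epochs):
        train_epoch(model, task_loader, opt)

# Illustrative setup: an ImageNet pre-trained backbone with a fresh head
# for the current task; two_stage_task_update would be called once per task.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 10)
```

The design intuition, consistent with the abstract's findings, is that the head-only stage avoids disturbing the pre-trained representation while the classifier is still random, and the subsequent full fine-tune applies only minimal regularization.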