In this paper, we investigate the continual learning of Vision Transformers (ViTs) in the challenging exemplar-free scenario, with a special focus on how to efficiently distill the knowledge of their crucial self-attention mechanism (SAM). Our work takes an initial step towards a surgical investigation of SAM for designing coherent continual learning methods in ViTs. We first carry out an evaluation of established continual learning regularization techniques. We then examine the effect of regularization when applied to two key enablers of SAM: (a) the contextualized embedding layers, for their ability to capture well-scaled representations with respect to the values, and (b) the prescaled attention maps, for carrying value-independent global contextual information. We demonstrate the benefits of each distillation strategy on two image recognition benchmarks (CIFAR100 and ImageNet-32): while (a) leads to better overall accuracy, (b) helps enhance rigidity while maintaining competitive performance. Furthermore, we identify a limitation imposed by the symmetric nature of regularization losses. To alleviate it, we propose an asymmetric variant and apply it to the pooled output distillation (POD) loss adapted for ViTs. Our experiments confirm that introducing asymmetry to POD boosts its plasticity while retaining stability across (a) and (b). Moreover, we observe low forgetting measures for all the compared methods, suggesting that ViTs might be naturally inclined continual learners.
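To make the two distillation targets and the asymmetric POD variant concrete, the following is a minimal PyTorch sketch. The function names (`prescaled_attention`, `pod_vit_loss`), the choice of token- and embedding-axis pooling as the ViT analogue of POD's width/height pooling, and the ReLU-based asymmetry are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch of the two distillation targets and a hedged asymmetric
# POD-style loss. Pooling axes, normalization, and the exact form of the
# asymmetry are assumptions for illustration.
import torch
import torch.nn.functional as F


def prescaled_attention(q: torch.Tensor, k: torch.Tensor) -> torch.Tensor:
    """Value-independent attention map: softmax(QK^T / sqrt(d)).

    q, k: (batch, heads, tokens, head_dim). One reading of target (b):
    distilling this map transfers global contextual information without
    tying the loss to the values V. Whether the paper matches pre- or
    post-softmax maps is an assumption here.
    """
    d = q.shape[-1]
    return F.softmax(q @ k.transpose(-2, -1) / d ** 0.5, dim=-1)


def pod_vit_loss(feat_old: torch.Tensor,
                 feat_new: torch.Tensor,
                 asymmetric: bool = False) -> torch.Tensor:
    """POD-style loss on ViT features of shape (batch, tokens, dim).

    Features are pooled along the token axis and along the embedding axis,
    L2-normalized, and compared. With asymmetric=True, only components
    where the new model falls below the old one are penalized (a ReLU
    hinge), so the new model remains free to add activation mass.
    """
    losses = []
    for dim in (1, 2):  # pool over tokens, then over embedding channels
        p_old = F.normalize(feat_old.sum(dim=dim), p=2, dim=-1)
        p_new = F.normalize(feat_new.sum(dim=dim), p=2, dim=-1)
        diff = p_old - p_new
        if asymmetric:
            diff = F.relu(diff)  # penalize only lost activation, not gains
        losses.append(diff.pow(2).sum(dim=-1).mean())
    return sum(losses) / len(losses)


if __name__ == "__main__":
    b, h, t, d = 2, 4, 16, 32
    q, k = torch.randn(b, h, t, d), torch.randn(b, h, t, d)
    attn = prescaled_attention(q, k)    # target (b): value-independent map
    old = torch.randn(b, t, h * d)      # frozen old-model embeddings, target (a)
    new = old + 0.1 * torch.randn_like(old)
    print(pod_vit_loss(old, new, asymmetric=True).item())
```

The ReLU hinge only penalizes components where the new model's pooled activations fall below the old model's, which is one plausible way an asymmetric loss can retain stability on previously learned features while leaving plasticity for new tasks, in the spirit of the variant described in the abstract.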