In the class incremental learning (CIL) setting, groups of classes are introduced to a model in each learning phase. The goal is to learn a unified model that performs well on all the classes observed so far. Given the recent popularity of Vision Transformers (ViTs) in conventional classification settings, an interesting question is to study their continual learning behaviour. In this work, we develop a Debiased Dual Distilled Transformer for CIL, dubbed $\textrm{D}^3\textrm{Former}$. The proposed model leverages a hybrid nested ViT design to ensure data efficiency and scalability to small as well as large datasets. In contrast to a recent ViT-based CIL approach, our $\textrm{D}^3\textrm{Former}$ does not dynamically expand its architecture when new tasks are learned, and thus remains suitable for a large number of incremental tasks. The improved CIL behaviour of $\textrm{D}^3\textrm{Former}$ owes to two fundamental changes to the ViT design. First, we treat incremental learning as a long-tailed classification problem, where the abundant samples from new classes vastly outnumber the limited exemplars available for old classes. To avoid bias against the minority old classes, we propose to dynamically adjust the logits to emphasize retaining the representations relevant to old tasks. Second, we propose to preserve the configuration of spatial attention maps as learning progresses across tasks. This helps reduce catastrophic forgetting by constraining the model to retain its attention on the most discriminative regions. $\textrm{D}^3\textrm{Former}$ obtains favorable results on incremental versions of the CIFAR-100, MNIST, SVHN, and ImageNet datasets.
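The two ideas above, logit adjustment against the new-class bias and distillation of spatial attention maps from the previous-task model, can be sketched conceptually as follows. This is a minimal illustrative sketch, not the authors' implementation; the temperature `tau`, the weighting factor `lambda_at`, and the helper names are assumptions introduced only for illustration.

```python
import torch
import torch.nn.functional as F

def debiased_ce_loss(logits, targets, class_counts, tau=1.0):
    """Cross-entropy with logit adjustment: logits are offset by the log class
    prior, so the abundant new classes are penalized and the under-represented
    old classes are not overwhelmed. `tau` is an illustrative scaling factor."""
    prior = class_counts.float() / class_counts.sum()
    adjusted = logits + tau * torch.log(prior + 1e-12)  # broadcast over the batch
    return F.cross_entropy(adjusted, targets)

def attention_distillation_loss(attn_old, attn_new):
    """Encourage the current model's spatial attention maps to stay close to
    those of the frozen previous-task model (both are lists of tensors of
    shape [batch, heads, tokens, tokens])."""
    loss = 0.0
    for a_old, a_new in zip(attn_old, attn_new):
        loss = loss + F.mse_loss(a_new, a_old.detach())
    return loss / max(len(attn_new), 1)

# Illustrative combined objective for one incremental step (weights assumed):
# total = debiased_ce_loss(logits, y, counts) + lambda_at * attention_distillation_loss(A_old, A_new)
```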