Transformers have been successfully applied to computer vision due to the powerful modeling capacity of self-attention. However, the excellent performance of transformers heavily depends on enormous amounts of training images. Thus, a data-efficient transformer solution is urgently needed. In this work, we propose an early knowledge distillation framework, termed DearKD, to improve the data efficiency required by transformers. Our DearKD is a two-stage framework that first distills the inductive biases from the early intermediate layers of a CNN and then gives the transformer full play by training without distillation. Further, our DearKD can be readily applied to the extreme data-free case where no real images are available. In this case, we propose a boundary-preserving intra-divergence loss based on DeepInversion to further close the performance gap against the full-data counterpart. Extensive experiments on ImageNet, partial ImageNet, the data-free setting, and other downstream tasks demonstrate the superiority of DearKD over its baselines and state-of-the-art methods.
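The abstract only outlines the two-stage schedule, so the following is a minimal, hypothetical PyTorch sketch of that schedule: stage 1 adds a feature-alignment term between the student's early transformer blocks and the early layers of a CNN teacher plus a logit-distillation term, while stage 2 drops all distillation and trains on the task loss alone. The toy models, the MSE/KL losses, and all names here are illustrative assumptions rather than the paper's actual implementation.

```python
# Hypothetical sketch of a two-stage early-distillation schedule in the
# spirit of DearKD (not the paper's implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyCNNTeacher(nn.Module):
    """Toy CNN teacher; its early feature map stands in for the intermediate
    CNN features that carry convolutional inductive biases."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.early = nn.Sequential(nn.Conv2d(3, 32, 3, 2, 1), nn.ReLU(),
                                   nn.Conv2d(32, 64, 3, 2, 1), nn.ReLU())
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(64, num_classes))
    def forward(self, x):
        feat = self.early(x)                 # (B, 64, H/4, W/4)
        return feat, self.head(feat)

class TinyViTStudent(nn.Module):
    """Toy transformer student with a patch embedding; its early blocks are
    the ones guided by the teacher during stage 1."""
    def __init__(self, num_classes=10, dim=64, img=32, patch=4):
        super().__init__()
        self.patch = nn.Conv2d(3, dim, patch, patch)
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.early_blocks = nn.TransformerEncoder(layer, num_layers=2)
        self.late_blocks = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, num_classes)
        self.proj = nn.Linear(dim, 64)       # map tokens to teacher channels
        self.grid = img // patch
    def forward(self, x):
        tok = self.patch(x).flatten(2).transpose(1, 2)   # (B, N, dim)
        early = self.early_blocks(tok)
        logits = self.head(self.late_blocks(early).mean(dim=1))
        return early, logits

def early_feature_loss(student_tokens, teacher_feat, proj, grid):
    """Align projected early student tokens with pooled early teacher features."""
    t = F.adaptive_avg_pool2d(teacher_feat, grid)        # (B, C, g, g)
    t = t.flatten(2).transpose(1, 2)                     # (B, N, C)
    return F.mse_loss(proj(student_tokens), t)

teacher, student = TinyCNNTeacher(), TinyViTStudent()
teacher.eval()
opt = torch.optim.AdamW(student.parameters(), lr=3e-4)

def train_step(x, y, stage):
    with torch.no_grad():
        t_feat, t_logits = teacher(x)
    s_early, s_logits = student(x)
    loss = F.cross_entropy(s_logits, y)
    if stage == 1:  # stage 1: distill inductive biases from early CNN layers
        loss = loss + early_feature_loss(s_early, t_feat, student.proj, student.grid)
        loss = loss + F.kl_div(F.log_softmax(s_logits, -1),
                               F.softmax(t_logits, -1), reduction="batchmean")
    # stage 2: no distillation terms; the transformer trains on the task loss alone
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
for stage in (1, 2):
    train_step(x, y, stage)
```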