CutMix is a vital augmentation strategy that determines the performance and generalization ability of vision transformers (ViTs). However, the inconsistency between the mixed images and their corresponding labels harms its efficacy. Existing CutMix variants tackle this problem by generating more consistent mixed images or more precise mixed labels, but inevitably introduce heavy training overhead or require extra information, undermining ease of use. To address this, we propose an efficient and effective Self-Motivated image Mixing method (SMMix), in which the model under training itself drives both image and label enhancement. Specifically, we propose a max-min attention region mixing approach that enriches the attention-focused objects in the mixed images. We then introduce a fine-grained label assignment technique that co-trains the output tokens of mixed images under fine-grained supervision. Moreover, we devise a novel feature consistency constraint to align features from mixed and unmixed images. Thanks to these designs within the self-motivated paradigm, SMMix incurs lower training overhead and achieves better performance than other CutMix variants. In particular, SMMix improves the accuracy of DeiT-T/S, CaiT-XXS-24/36, and PVT-T/S/M/L by more than +1% on ImageNet-1k. The generalization capability of our method is also demonstrated on downstream tasks and out-of-distribution datasets. Code is available at https://github.com/ChenMnZ/SMMix.
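To make the two key ingredients above concrete, below is a minimal PyTorch-style sketch of what a max-min attention region mix and a feature consistency term might look like. The function names, the previous-image pairing scheme, the fixed square window size `side`, and the area-weighted feature target are all assumptions made for illustration; they are not the authors' actual implementation (see the linked repository for that).

```python
import torch
import torch.nn.functional as F

def max_min_attention_mix(images, attn, side=4):
    """Sketch of max-min attention region mixing (hypothetical helper).

    images: (B, C, H, W) input batch.
    attn:   (B, h, w) patch-level attention scores produced by the
            model under training (e.g., averaged CLS-token attention).
    side:   edge length, in patches, of the square region to transplant.

    Each image receives the most-attended window of the previous image
    in the batch, pasted over its own least-attended window, so the
    mixed image keeps two attention-salient objects.
    """
    B, C, H, W = images.shape
    h, w = attn.shape[1], attn.shape[2]
    ph, pw = H // h, W // w                      # pixels per patch

    # Score every side x side window of patches by its mean attention.
    win = F.avg_pool2d(attn.unsqueeze(1), side, stride=1).squeeze(1)
    wp = w - side + 1                            # window-grid width
    flat = win.flatten(1)
    max_idx = flat.argmax(dim=1)                 # most salient window
    min_idx = flat.argmin(dim=1)                 # least salient window

    mixed = images.clone()
    src = images.roll(1, dims=0)                 # pair image i with i-1
    src_max = max_idx.roll(1, dims=0)
    for i in range(B):
        ys, xs = divmod(src_max[i].item(), wp)   # source: max attention
        yt, xt = divmod(min_idx[i].item(), wp)   # target: min attention
        patch = src[i, :, ys*ph:(ys+side)*ph, xs*pw:(xs+side)*pw]
        mixed[i, :, yt*ph:(yt+side)*ph, xt*pw:(xt+side)*pw] = patch

    lam = (side * side) / (h * w)                # area of pasted region
    return mixed, lam


def feature_consistency_loss(feat_mixed, feat_a, feat_b, lam):
    """One plausible reading of the feature consistency constraint:
    pull the mixed image's features toward an area-weighted combination
    of the two unmixed images' features. The combination target is an
    assumption; the abstract only states that mixed and unmixed
    features are aligned."""
    target = lam * feat_a + (1.0 - lam) * feat_b
    return F.mse_loss(feat_mixed, target.detach())
```

In this sketch the mixing ratio `lam` is simply the pasted area fraction, which could also serve as the label-mixing weight; the fine-grained label assignment described above would additionally supervise the output tokens inside and outside the pasted region with their respective source labels.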