Transformer architecture search (TAS) aims to automatically discover efficient vision transformers (ViTs), reducing the need for manual design. Existing TAS methods typically train an over-parameterized network (i.e., a supernet) that encompasses all candidate architectures (i.e., subnets). However, all subnets share the same set of weights, which leads to interference that severely degrades the smaller subnets. We have found that well-trained small subnets can serve as a good foundation for training larger ones. Motivated by this, we propose a progressive training framework, dubbed GrowTAS, that begins with training small subnets and gradually incorporates larger ones. This reduces the interference among subnets and stabilizes the training process. We also introduce GrowTAS+, which fine-tunes only a subset of weights to further enhance the performance of large subnets. Extensive experiments on ImageNet and several transfer learning benchmarks, including CIFAR-10/100, Flowers, CARS, and INAT-19, demonstrate the effectiveness of our approach over current TAS methods.
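The progressive schedule described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the subnet size groups, the grow epochs, and the helper `active_pool` are all hypothetical names and values chosen for clarity.

```python
import random

def active_pool(subnet_groups, epoch, grow_epochs):
    """Return the subnet groups eligible for sampling at this epoch.

    Training starts with the smallest group only; one larger group is
    admitted each time the epoch counter passes an entry in grow_epochs.
    """
    n_groups = 1 + sum(epoch >= e for e in grow_epochs)
    return subnet_groups[:n_groups]

# Hypothetical [depth, embedding-dim] groups, ordered small to large.
subnet_groups = [[6, 192], [12, 384], [12, 768]]
grow_epochs = [10, 20]  # epochs at which larger groups join training

for epoch in [0, 10, 25]:
    pool = active_pool(subnet_groups, epoch, grow_epochs)
    sampled = random.choice(pool)  # one subnet trained per step
    print(f"epoch {epoch}: {len(pool)} group(s) active, sampled {sampled}")
```

Under this sketch, only the smallest subnets receive gradient updates early on, so by the time larger subnets are admitted they inherit already well-trained shared weights, which is the intuition the abstract gives for reduced interference.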