Neural Architecture Search (NAS) has shown promising performance in the automatic design of vision transformers (ViTs) exceeding 1G FLOPs. However, designing lightweight, low-latency ViT models for diverse mobile devices remains a significant challenge. In this work, we propose ElasticViT, a two-stage NAS approach that trains a high-quality ViT supernet over a very large search space covering a wide range of mobile devices, and then searches for an optimal sub-network (subnet) for direct deployment. A key obstacle is that prior supernet training methods relying on uniform sampling suffer from gradient conflicts: the sampled subnets can have vastly different model sizes (e.g., 50M vs. 2G FLOPs), leading to divergent optimization directions and inferior performance. To address this challenge, we propose two novel sampling techniques: complexity-aware sampling and performance-aware sampling. Complexity-aware sampling limits the FLOPs difference among subnets sampled across adjacent training steps, while still covering the different-sized subnets in the search space. Performance-aware sampling further selects subnets with good accuracy, which reduces gradient conflicts and improves supernet quality. The discovered ElasticViT models achieve top-1 accuracy from 67.2% to 80.0% on ImageNet at 60M to 800M FLOPs without extra retraining, outperforming all prior CNNs and ViTs in terms of both accuracy and latency. Our tiny and small models are also the first ViT models to surpass state-of-the-art CNNs with significantly lower latency on mobile devices. For instance, ElasticViT-S1 runs 2.62x faster than EfficientNet-B0 with 0.1% higher top-1 accuracy.
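To make the two sampling rules concrete, below is a minimal, hypothetical Python sketch of how they could compose in a supernet training loop. All names here (FLOPS_LEVELS, MAX_FLOPS_GAP, sample_near, accuracy_proxy) are illustrative assumptions for exposition, not the paper's actual implementation or API.

```python
import random

# Hypothetical sketch of complexity-aware + performance-aware sampling.
# FLOPs levels and the gap cap are made-up values for illustration.
FLOPS_LEVELS = [100e6, 200e6, 400e6, 800e6]  # model sizes spanning the search space
MAX_FLOPS_GAP = 200e6                        # cap on the FLOPs jump between adjacent steps
NUM_CANDIDATES = 8                           # candidates scored per training step

def sample_near(target_flops):
    """Stub: draw a random subnet (represented here by its FLOPs) near a target size."""
    return target_flops * random.uniform(0.9, 1.1)

def accuracy_proxy(subnet_flops):
    """Stub: cheap accuracy estimate (in practice, e.g., a few validation batches)."""
    return random.random()

def next_subnet(step, prev_flops):
    # Complexity-aware sampling: cycle through FLOPs levels so every model size
    # is visited, but clamp the change relative to the previous step's subnet.
    target = FLOPS_LEVELS[step % len(FLOPS_LEVELS)]
    target = min(max(target, prev_flops - MAX_FLOPS_GAP), prev_flops + MAX_FLOPS_GAP)

    # Performance-aware sampling: among several candidates near the target,
    # pick the one with the best estimated accuracy to train at this step.
    candidates = [sample_near(target) for _ in range(NUM_CANDIDATES)]
    return max(candidates, key=accuracy_proxy)

prev = FLOPS_LEVELS[0]
for step in range(8):
    prev = next_subnet(step, prev)
    print(f"step {step}: training subnet with ~{prev / 1e6:.0f}M FLOPs")
```

The clamping step bounds the FLOPs difference between subnets trained at adjacent steps (the complexity-aware rule), while cycling through the levels still exposes the supernet to every model size; scoring candidates with a cheap proxy (the performance-aware rule) biases training toward promising subnets.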