Vision Transformers (ViTs) have recently dominated a range of computer vision tasks, yet they suffer from low training-data efficiency and inferior local semantic representation capability without appropriate inductive bias. Convolutional neural networks (CNNs) inherently capture region-aware semantics, inspiring researchers to reintroduce CNNs into the architecture of ViTs to provide the desirable inductive bias. However, is the locality achieved by the micro-level CNNs embedded in ViTs good enough? In this paper, we investigate this problem by exploring in depth how the macro architecture of hybrid CNN/ViT models enhances the performance of hierarchical ViTs. In particular, we study the role of the token embedding layers, also known as convolutional embedding (CE), and systematically reveal how CE injects desirable inductive bias into ViTs. In addition, we apply the optimal CE configuration to four recently released state-of-the-art ViTs, effectively boosting their performance. Finally, we release a family of efficient hybrid CNN/ViT models, dubbed CETNets, which can serve as generic vision backbones. Specifically, CETNets achieve 84.9% Top-1 accuracy on ImageNet-1K (trained from scratch), 48.6% box mAP on the COCO benchmark, and 51.6% mIoU on ADE20K, substantially improving over the corresponding state-of-the-art baselines.
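To make the notion of convolutional embedding concrete, the sketch below shows one plausible way a CE stage could tokenize an image with stacked strided convolutions instead of a single linear patch projection. This is only an illustrative assumption: the class name, kernel sizes, channel widths, and normalization choices are hypothetical and not taken from the paper's actual CETNet configuration.

```python
# Minimal sketch (assumption): a convolutional embedding (CE) block that could
# replace a linear patch-embedding layer in a hierarchical ViT. The exact
# layer choices of CETNets are not reproduced here; this only illustrates
# using overlapping strided convolutions to inject local inductive bias.
import torch
import torch.nn as nn

class ConvEmbedding(nn.Module):
    """Downsamples the input by 4x with two overlapping 3x3 strided convolutions."""
    def __init__(self, in_ch: int = 3, embed_dim: int = 96):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Conv2d(in_ch, embed_dim // 2, kernel_size=3, stride=2, padding=1),
            nn.BatchNorm2d(embed_dim // 2),
            nn.GELU(),
            nn.Conv2d(embed_dim // 2, embed_dim, kernel_size=3, stride=2, padding=1),
            nn.BatchNorm2d(embed_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.proj(x)                      # (B, C, H/4, W/4)
        return x.flatten(2).transpose(1, 2)   # (B, N, C) token sequence for the ViT stage

# Usage example: a 224x224 image yields 56x56 = 3136 tokens of dimension 96.
tokens = ConvEmbedding()(torch.randn(1, 3, 224, 224))
print(tokens.shape)  # torch.Size([1, 3136, 96])
```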