There still remains an extreme performance gap between Vision Transformers (ViTs) and Convolutional Neural Networks (CNNs) when training from scratch on small datasets, which is commonly attributed to the lack of inductive bias. In this paper, we further investigate this problem and point out two weaknesses of ViTs in inductive biases, namely, spatial relevance and diverse channel representations. First, on the spatial aspect, objects are locally compact and relevant, so fine-grained features need to be extracted from a token and its neighbors. However, the lack of data hinders ViTs from attending to this spatial relevance. Second, on the channel aspect, representations exhibit diversity across different channels, but scarce data prevents ViTs from learning representations strong enough for accurate recognition. To this end, we propose the Dynamic Hybrid Vision Transformer (DHVT) as a solution that strengthens these two inductive biases. On the spatial aspect, we adopt a hybrid structure in which convolution is integrated into the patch embedding and the multi-layer perceptron (MLP) module, forcing the model to capture each token's features together with those of its neighbors. On the channel aspect, we introduce a dynamic feature aggregation module in the MLP and a brand-new "head token" design in the multi-head self-attention module, which help re-calibrate the channel representations and allow the representations of different channel groups to interact with each other. The fusion of weak channel representations forms a representation strong enough for classification. With this design, we successfully eliminate the performance gap between CNNs and ViTs, and our DHVT achieves state-of-the-art results with lightweight models: 85.68% on CIFAR-100 with 22.8M parameters and 82.3% on ImageNet-1K with 24.0M parameters. Code is available at https://github.com/ArieSeirack/DHVT.
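To make the spatial idea above concrete, the following is a minimal NumPy sketch (not the authors' implementation; the function name, shared kernel, and toy shapes are illustrative assumptions) of how patch tokens can be reshaped back to their 2D grid and mixed with their neighbors by a depthwise-style convolution, which is the kind of local operation the hybrid patch embedding and MLP integrate:

```python
import numpy as np

def depthwise_conv_tokens(tokens, h, w, kernel):
    """Mix each patch token with its 8 spatial neighbors.

    tokens: (h*w, c) array of patch tokens (class/head tokens excluded).
    kernel: (3, 3) spatial weights, shared across channels here for simplicity
            (a real depthwise conv would learn one kernel per channel).
    Returns an array of the same shape (h*w, c).
    """
    c = tokens.shape[1]
    grid = tokens.reshape(h, w, c)                    # restore the 2D layout
    padded = np.pad(grid, ((1, 1), (1, 1), (0, 0)))   # zero-pad spatial dims
    out = np.zeros_like(grid)
    for dy in range(3):                               # accumulate shifted copies
        for dx in range(3):
            out += kernel[dy, dx] * padded[dy:dy + h, dx:dx + w, :]
    return out.reshape(h * w, c)

# A 4x4 grid of tokens with 4 channels; an identity kernel leaves tokens
# unchanged, while e.g. an averaging kernel blends each token with neighbors.
tokens = np.arange(16 * 4, dtype=float).reshape(16, 4)
identity = np.zeros((3, 3)); identity[1, 1] = 1.0
assert np.allclose(depthwise_conv_tokens(tokens, 4, 4, identity), tokens)
```

The key design point is that plain self-attention treats the token sequence as unordered, whereas this grid-and-convolve step hard-wires locality, the inductive bias the abstract argues small datasets cannot supply on their own.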