MetaFormer, the abstracted architecture of Transformer, has been found to play a significant role in achieving competitive performance. In this paper, we further explore the capacity of MetaFormer, again without focusing on token mixer design: we introduce several baseline models under MetaFormer using the most basic or common mixers, and summarize our observations as follows. (1) MetaFormer ensures a solid lower bound of performance. By merely adopting identity mapping as the token mixer, the MetaFormer model, termed IdentityFormer, achieves >80% accuracy on ImageNet-1K. (2) MetaFormer works well with arbitrary token mixers. When the token mixer is specified as even a random matrix to mix tokens, the resulting model, RandFormer, yields an accuracy of >81%, outperforming IdentityFormer. MetaFormer's results can thus be relied upon when new token mixers are adopted. (3) MetaFormer effortlessly offers state-of-the-art results. With just conventional token mixers dating back five years, the models instantiated from MetaFormer already beat the state of the art. (a) ConvFormer outperforms ConvNeXt. Taking the common depthwise separable convolutions as the token mixer, the model termed ConvFormer, which can be regarded as a pure CNN, outperforms the strong CNN model ConvNeXt. (b) CAFormer sets a new record on ImageNet-1K. By simply applying depthwise separable convolutions as the token mixer in the bottom stages and vanilla self-attention in the top stages, the resulting model, CAFormer, sets a new record on ImageNet-1K: it achieves an accuracy of 85.5% at 224x224 resolution, under normal supervised training without external data or distillation. In our expedition to probe MetaFormer, we also find that a new activation, StarReLU, reduces activation FLOPs by 71% compared with GELU yet achieves better performance. We expect StarReLU to have great potential in MetaFormer-like models as well as other neural networks.
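To make the two trivial baselines concrete, the sketch below illustrates the token mixers behind IdentityFormer and RandFormer: an identity mapping, and a frozen random matrix applied along the token dimension. The shapes, function names, and seeding here are illustrative assumptions, not the paper's code.

```python
import random

def identity_mixer(tokens):
    # IdentityFormer's "mixer": pass tokens through unchanged;
    # all modeling capacity comes from the rest of the MetaFormer block.
    return tokens

def make_random_mixer(num_tokens, seed=0):
    # RandFormer's mixer: a fixed (non-learnable) random matrix W that
    # linearly mixes information across tokens, applied per channel.
    rng = random.Random(seed)
    W = [[rng.gauss(0.0, 1.0) for _ in range(num_tokens)]
         for _ in range(num_tokens)]

    def mix(tokens):
        # tokens: list of num_tokens feature vectors (lists of floats)
        dim = len(tokens[0])
        return [[sum(W[i][j] * tokens[j][d] for j in range(num_tokens))
                 for d in range(dim)]
                for i in range(num_tokens)]

    return mix
```

Because W is frozen, RandFormer's >81% accuracy is attributable to the MetaFormer architecture itself rather than to a carefully designed token mixer.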
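The StarReLU activation mentioned above is defined in the paper as StarReLU(x) = s · (ReLU(x))² + b, with learnable scale s and bias b; the defaults below follow the paper's derivation for a standard-normal input (s = 2/√5 ≈ 0.8944, b = −1/√5 ≈ −0.4472), so the output is approximately zero-mean and unit-variance. This is a minimal scalar sketch, not the paper's implementation.

```python
# StarReLU(x) = scale * relu(x)**2 + bias.
# For x ~ N(0, 1): E[relu(x)**2] = 1/2 and Var[relu(x)**2] = 5/4,
# so scale = 2/sqrt(5) and bias = -1/sqrt(5) standardize the output.
DEFAULT_SCALE = 2.0 / 5.0 ** 0.5   # ~0.8944
DEFAULT_BIAS = -1.0 / 5.0 ** 0.5   # ~-0.4472

def star_relu(x, scale=DEFAULT_SCALE, bias=DEFAULT_BIAS):
    r = max(x, 0.0)        # ReLU
    return scale * r * r + bias
```

The FLOP saving over GELU comes from replacing GELU's erf/tanh evaluation with one multiplication for the square (plus the shared scale and bias), which is where the reported 71% reduction in activation FLOPs originates.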