Learning discriminative representations from images and videos is challenging, due to the large local redundancy and complex global dependency in such visual data. Convolutional neural networks (CNNs) and vision transformers (ViTs) have been the two dominant frameworks in the past few years. Though CNNs can efficiently reduce local redundancy via convolution within a small neighborhood, their limited receptive field makes it hard to capture global dependency. Conversely, ViTs can effectively capture long-range dependency via self-attention, but blind similarity comparison among all tokens leads to high redundancy. To resolve these problems, we propose a novel Unified transFormer (UniFormer), which seamlessly integrates the merits of convolution and self-attention in a concise transformer format. Unlike typical transformer blocks, the relation aggregators in our UniFormer block are equipped with local token affinity in shallow layers and global token affinity in deep layers, allowing it to tackle both redundancy and dependency for efficient and effective representation learning. Finally, we flexibly stack UniFormer blocks into a new powerful backbone and adopt it for various vision tasks, from the image to the video domain and from classification to dense prediction. Without any extra training data, our UniFormer achieves 86.3% top-1 accuracy on ImageNet-1K classification. With only ImageNet-1K pre-training, it achieves state-of-the-art performance on a broad range of downstream tasks, e.g., it obtains 82.9%/84.8% top-1 accuracy on Kinetics-400/600, 60.9%/71.2% top-1 accuracy on Something-Something V1/V2 video classification, 53.8 box AP and 46.4 mask AP on COCO object detection, 50.8 mIoU on ADE20K semantic segmentation, and 77.4 AP on COCO pose estimation. Code is available at https://github.com/Sense-X/UniFormer.
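To make the block design concrete, below is a minimal PyTorch sketch of the idea: a shallow-layer block aggregates tokens with a local affinity (a learnable depthwise convolution over a small neighborhood), while a deep-layer block uses a global affinity (standard multi-head self-attention), with a shared structure of position embedding, relation aggregator, and feed-forward network. The class name, kernel size, and normalization choices here are illustrative assumptions rather than the released implementation; see the repository above for the official code.

```python
import torch
import torch.nn as nn

class UniFormerBlockSketch(nn.Module):
    """Hypothetical simplification of a UniFormer block.

    local=True  -> local relation aggregation (shallow layers)
    local=False -> global relation aggregation (deep layers)
    """
    def __init__(self, dim, num_heads=8, local=True, kernel_size=5):
        super().__init__()
        self.local = local
        # Dynamic position embedding: a depthwise 3x3 convolution.
        self.pos_embed = nn.Conv2d(dim, dim, 3, padding=1, groups=dim)
        self.norm1 = nn.BatchNorm2d(dim)
        if local:
            # Local token affinity: a learnable kernel shared across
            # positions within a small neighborhood, i.e. a depthwise conv.
            self.attn = nn.Conv2d(dim, dim, kernel_size,
                                  padding=kernel_size // 2, groups=dim)
        else:
            # Global token affinity: similarity-based self-attention.
            self.attn = nn.MultiheadAttention(dim, num_heads,
                                              batch_first=True)
        self.norm2 = nn.BatchNorm2d(dim)
        # Pointwise feed-forward network with 4x expansion.
        self.mlp = nn.Sequential(nn.Conv2d(dim, dim * 4, 1), nn.GELU(),
                                 nn.Conv2d(dim * 4, dim, 1))

    def forward(self, x):                      # x: (B, C, H, W)
        x = x + self.pos_embed(x)
        if self.local:
            x = x + self.attn(self.norm1(x))   # local aggregation
        else:
            b, c, h, w = x.shape
            t = self.norm1(x).flatten(2).transpose(1, 2)  # (B, HW, C)
            t, _ = self.attn(t, t, t)          # global self-attention
            x = x + t.transpose(1, 2).reshape(b, c, h, w)
        return x + self.mlp(self.norm2(x))

# Usage: shallow stages stack local blocks, deep stages stack global ones.
blocks = nn.Sequential(UniFormerBlockSketch(64, local=True),
                       UniFormerBlockSketch(64, local=False))
out = blocks(torch.randn(2, 64, 14, 14))
```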