Transformer, an attention-based encoder-decoder architecture, has revolutionized the field of natural language processing. Inspired by this significant achievement, some pioneering works have recently been done on adapting Transformer-like architectures to Computer Vision (CV), and they have demonstrated their effectiveness on various CV tasks. Relying on their competitive modeling capability, visual Transformers have achieved impressive performance on multiple benchmarks such as ImageNet, COCO, and ADE20K compared with modern Convolutional Neural Networks (CNNs). In this paper, we provide a comprehensive review of over one hundred visual Transformers for three fundamental CV tasks (classification, detection, and segmentation), and propose a taxonomy that organizes these methods according to their motivations, structures, and usage scenarios. Because of the differences in training settings and targeted tasks, we also evaluate these methods under different configurations, rather than on disparate benchmarks alone, to enable easy and intuitive comparison. Furthermore, we reveal a series of essential but unexploited aspects that may empower Transformers to stand out from numerous architectures, e.g., slack high-level semantic embeddings that bridge the gap between visual and sequential Transformers. Finally, three promising research directions are suggested for further investigation.
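As background for the attention-based architecture referred to above, the following is a minimal sketch of scaled dot-product self-attention, the core operation shared by the visual Transformers surveyed here. It assumes PyTorch; the function name and the toy shapes (196 patch tokens, mirroring a 14x14 patch grid) are illustrative choices, not the implementation of any specific method in this survey.

```python
# Minimal sketch of scaled dot-product attention (illustrative, PyTorch).
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v):
    """q, k, v: (batch, seq_len, d_model) tensors."""
    d_k = q.size(-1)
    # Similarity of each query to every key, scaled by sqrt(d_k)
    # to keep softmax gradients well-behaved.
    scores = q @ k.transpose(-2, -1) / d_k**0.5
    weights = F.softmax(scores, dim=-1)
    # Each output token is an attention-weighted sum of the value vectors.
    return weights @ v

# Toy usage: 2 images, 196 patch tokens (14x14 grid), embedding dim 64,
# mimicking how a visual Transformer attends over image patches.
x = torch.randn(2, 196, 64)
out = scaled_dot_product_attention(x, x, x)  # self-attention: q = k = v
print(out.shape)  # torch.Size([2, 196, 64])
```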