Transformer, first applied to the field of natural language processing, is a type of deep neural network based mainly on the self-attention mechanism. Thanks to its strong representation capabilities, researchers are looking at ways to apply transformers to computer vision tasks. On a variety of visual benchmarks, transformer-based models perform similarly to, or better than, other types of networks such as convolutional and recurrent networks. Given their high performance and lack of need for human-defined inductive bias, transformers are receiving more and more attention from the computer vision community. In this paper, we review these visual transformer models by categorizing them according to different tasks and analyzing their advantages and disadvantages. The main categories we explore include the backbone network, high/mid-level vision, low-level vision, and video processing. We also take a brief look at the self-attention mechanism in computer vision, as it is the base component of the transformer. Furthermore, we cover efficient transformer methods for pushing transformers into real device-based applications. Toward the end of this paper, we discuss the remaining challenges and suggest several directions for further research on visual transformers.
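The self-attention mechanism the abstract identifies as the transformer's base component can be illustrated with a minimal sketch of scaled dot-product self-attention. This is a generic NumPy illustration, not code from the surveyed models; the dimensions, weight matrices, and function name are chosen here for clarity.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a token sequence X of shape (n, d)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv              # project tokens to queries, keys, values
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # pairwise token similarities, scaled
    scores -= scores.max(axis=-1, keepdims=True)  # numerically stable softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V                            # attention-weighted mixture of values

# Illustrative sizes: 4 tokens, model width 8
rng = np.random.default_rng(0)
n, d = 4, 8
X = rng.normal(size=(n, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (4, 8): one attended representation per input token
```

Because every token attends to every other token, the attention weights form an n-by-n matrix, which is the source of the quadratic cost that the efficient-transformer methods discussed in the survey aim to reduce.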