Most recent semantic segmentation methods adopt a fully-convolutional network (FCN) with an encoder-decoder architecture. The encoder progressively reduces the spatial resolution and learns more abstract/semantic visual concepts with larger receptive fields. Since context modeling is critical for segmentation, the latest efforts have focused on enlarging the receptive field, through either dilated/atrous convolutions or inserted attention modules. However, the encoder-decoder based FCN architecture itself remains unchanged. In this paper, we aim to provide an alternative perspective by treating semantic segmentation as a sequence-to-sequence prediction task. Specifically, we deploy a pure transformer (i.e., without convolution and without resolution reduction) to encode an image as a sequence of patches. With global context modeled in every layer of the transformer, this encoder can be combined with a simple decoder to provide a powerful segmentation model, termed SEgmentation TRansformer (SETR). Extensive experiments show that SETR achieves a new state of the art on ADE20K (50.28% mIoU) and Pascal Context (55.83% mIoU), and competitive results on Cityscapes. In particular, we achieve the first position (44.42% mIoU) on the highly competitive ADE20K test server leaderboard.
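The two ideas the abstract rests on can be made concrete in a few lines: an image is flattened into a sequence of fixed-size patches, and a self-attention layer lets every patch attend to every other patch, so global context is available at every layer and the sequence length (i.e., spatial resolution) is never reduced. The following is a minimal NumPy sketch of those two operations, not the paper's implementation; the patch size, embedding dimension, and random weights are illustrative assumptions.

```python
import numpy as np

def patchify(img, p=16):
    """Split an H x W x C image into a sequence of flattened p x p patches."""
    H, W, C = img.shape
    assert H % p == 0 and W % p == 0, "image must be divisible into patches"
    patches = img.reshape(H // p, p, W // p, p, C).transpose(0, 2, 1, 3, 4)
    return patches.reshape(-1, p * p * C)  # (N, p*p*C) with N = H*W / p^2

def self_attention(x, Wq, Wk, Wv):
    """Single-head global self-attention: each patch attends to all patches."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(k.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)      # softmax over all patches
    return attn @ v                               # same sequence length out

rng = np.random.default_rng(0)
img = rng.random((64, 64, 3))                     # toy 64x64 RGB image
seq = patchify(img)                               # (16, 768): 4x4 grid of 16x16 patches
d = 32                                            # illustrative embedding dim
We = rng.standard_normal((seq.shape[1], d)) * 0.02        # patch embedding
Wq, Wk, Wv = (rng.standard_normal((d, d)) * 0.02 for _ in range(3))
tokens = seq @ We
out = self_attention(tokens, Wq, Wk, Wv)
print(seq.shape, out.shape)                       # sequence length unchanged
```

Because attention never shrinks the sequence, a decoder only needs to reshape the output tokens back to the patch grid and upsample to full resolution, which is why a simple decoder suffices in this design.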