Two factors have proven to be very important to the performance of semantic segmentation models: global context and multi-level semantics. However, generating features that capture both factors typically incurs high computational cost, which is problematic in real-time scenarios. In this paper, we propose a new model, called Attention-Augmented Network (AttaNet), to capture both global context and multi-level semantics while maintaining high efficiency. AttaNet consists of two primary modules: Strip Attention Module (SAM) and Attention Fusion Module (AFM). Observing that challenging images with low segmentation accuracy contain significantly more vertical strip regions than horizontal ones, SAM utilizes a striping operation to drastically reduce the complexity of encoding global context in the vertical direction while preserving most of the contextual information, compared with non-local approaches. Moreover, AFM follows a cross-level aggregation strategy to limit the computation, and adopts an attention strategy to weight the importance of different levels of features at each pixel when fusing them, yielding an efficient multi-level representation. We have conducted extensive experiments on two semantic segmentation benchmarks, and our network achieves different levels of speed/accuracy trade-offs on Cityscapes, e.g., 71 FPS/79.9% mIoU, 130 FPS/78.5% mIoU, and 180 FPS/70.1% mIoU, as well as leading performance on ADE20K.
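To make the two modules concrete, below is a minimal PyTorch sketch of the ideas the abstract describes: a strip-attention layer that pools the vertical direction into a 1xW strip before computing attention (reducing the O((HW)^2) cost of non-local attention to O(HW*W)), and a per-pixel attention gate for fusing two feature levels. All layer shapes, the reduction ratio, the pooling choice, and the class/parameter names (StripAttention, AttentionFusion, reduced, gate) are assumptions for illustration; the paper's actual implementation may differ.

```python
# Hypothetical sketch based only on the abstract, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class StripAttention(nn.Module):
    """Attention over a vertically pooled strip (in the spirit of SAM)."""
    def __init__(self, channels, reduced=None):
        super().__init__()
        reduced = reduced or channels // 8  # assumed reduction ratio
        self.query = nn.Conv2d(channels, reduced, 1)
        self.key = nn.Conv2d(channels, reduced, 1)
        self.value = nn.Conv2d(channels, channels, 1)

    def forward(self, x):
        n, c, h, w = x.shape
        # Striping: average each column into a 1xW strip, so keys/values
        # summarize the vertical direction and attention costs O(HW * W)
        # instead of O((HW)^2) for full non-local attention.
        strip = x.mean(dim=2, keepdim=True)            # (n, c, 1, w)
        q = self.query(x).flatten(2).transpose(1, 2)   # (n, h*w, c')
        k = self.key(strip).flatten(2)                 # (n, c', w)
        v = self.value(strip).flatten(2)                # (n, c, w)
        attn = torch.softmax(q @ k, dim=-1)            # (n, h*w, w)
        out = (attn @ v.transpose(1, 2)).transpose(1, 2).reshape(n, c, h, w)
        return out + x                                 # residual connection

class AttentionFusion(nn.Module):
    """Per-pixel gated fusion of two feature levels (in the spirit of AFM)."""
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Conv2d(2 * channels, 1, 1)

    def forward(self, low, high):
        # Upsample the deeper feature, then let a learned per-pixel gate
        # weight how much each level contributes, instead of naive addition.
        high = F.interpolate(high, size=low.shape[2:], mode='bilinear',
                             align_corners=False)
        a = torch.sigmoid(self.gate(torch.cat([low, high], dim=1)))
        return a * high + (1 - a) * low
```

The key design point in both sketches is that attention is used to cut cost, not add it: SAM attends to W strip summaries rather than all HW positions, and AttentionFusion replaces a second heavy context branch with a single 1x1-conv gate.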