Self-attention and channel attention, which model the semantic interdependencies in the spatial and channel dimensions respectively, have recently been widely used for semantic segmentation. However, computing spatial attention and channel attention separately and then fusing them directly can cause conflicting feature representations. In this paper, we propose the Channelized Axial Attention (CAA) to seamlessly integrate channel attention and axial attention with reduced computational complexity. After computing the axial attention maps, we propose to channelize the intermediate results obtained from the transposed dot-product, so that the channel importance of each axial representation is optimized across the whole receptive field. We further develop grouped vectorization, which allows our model to run with very little memory consumption at a speed comparable to full vectorization. Comparative experiments conducted on multiple benchmark datasets, including Cityscapes, PASCAL Context and COCO-Stuff, demonstrate that our CAA not only requires far fewer computational resources than other dual attention models such as DANet, but also outperforms the state-of-the-art ResNet-101-based segmentation models on all tested datasets.
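To make the abstract's central idea concrete, the following is a minimal sketch, not the authors' implementation, of axial attention along the height axis in which the intermediate results of the transposed dot-product are kept per position and re-weighted by a channel gate before being summed. The module name, the gating sub-network, and all hyper-parameters are hypothetical and assumed for illustration only.

```python
import torch
import torch.nn as nn


class ChannelizedAxialAttentionH(nn.Module):
    """Illustrative sketch: height-axis axial attention whose intermediate
    (per-position) results are channel-gated before the final summation."""

    def __init__(self, channels, reduction=4):
        super().__init__()
        self.query = nn.Conv2d(channels, channels, 1)
        self.key = nn.Conv2d(channels, channels, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        # Hypothetical "channelization" gate applied to each intermediate result.
        self.gate = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).permute(0, 3, 2, 1).reshape(b * w, h, c)  # (B*W, H, C)
        k = self.key(x).permute(0, 3, 1, 2).reshape(b * w, c, h)    # (B*W, C, H)
        v = self.value(x).permute(0, 3, 2, 1).reshape(b * w, h, c)  # (B*W, H, C)

        # Axial attention map over the height axis.
        attn = torch.softmax(torch.bmm(q, k) / c ** 0.5, dim=-1)    # (B*W, H, H)

        # Keep the intermediate products attn[i, j] * v[j] instead of summing at once.
        inter = attn.unsqueeze(-1) * v.unsqueeze(1)                 # (B*W, H, H, C)

        # Channelize: re-weight the channels of each intermediate result,
        # then reduce over the attended positions.
        out = (self.gate(inter) * inter).sum(dim=2)                 # (B*W, H, C)

        return out.reshape(b, w, h, c).permute(0, 3, 2, 1)          # (B, C, H, W)
```

Note that materializing the (B*W, H, H, C) intermediate tensor in one shot is memory-intensive; this is the cost that the grouped vectorization mentioned in the abstract is designed to reduce, by processing the B*W batch dimension in smaller groups while retaining near-fully-vectorized speed.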