Recent works have achieved great success in improving the performance of multiple computer vision tasks by extracting features with a large number of channels using deep neural networks. However, many channels of the extracted features are not discriminative and contain a large amount of redundant information. In this paper, we address the above issue by introducing the Distance Guided Channel Weighting (DGCW) module. The DGCW module is constructed in a pixel-wise context extraction manner: it enhances the discriminativeness of features by weighting the different channels of each pixel's feature vector when modeling its relationships with other pixels. It can make full use of the highly discriminative information while ignoring the less discriminative information contained in feature maps, and it also captures long-range dependencies. Furthermore, by incorporating the DGCW module into a baseline segmentation network, we propose the Distance Guided Channel Weighting Network (DGCWNet). We conduct extensive experiments to demonstrate the effectiveness of DGCWNet. In particular, it achieves 81.6% mIoU on Cityscapes using only fine-annotated data for training, and it also performs well on two other semantic segmentation datasets, i.e., Pascal Context and ADE20K. Code will be available soon at https://github.com/LanyunZhu/DGCWNet.
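To illustrate the general idea of pixel-wise channel weighting combined with context modeling, the sketch below shows one possible reading of the abstract: per-pixel channel weights re-scale each pixel's feature vector before pairwise affinities are computed across all pixels, in the spirit of a non-local block. The module name, the 1x1 convolution used to predict weights, and all shapes are assumptions for illustration only; this is not the authors' implementation of DGCW.

```python
# Minimal sketch (assumptions, not the paper's method): weight each channel of
# every pixel's feature vector, then model long-range pixel-to-pixel relations.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PixelwiseChannelWeighting(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # Hypothetical 1x1 conv that predicts a weight for each channel of
        # every pixel's feature vector.
        self.weight_pred = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):                                 # x: (B, C, H, W)
        b, c, h, w = x.shape
        # Per-pixel, per-channel weights in (0, 1) to emphasize discriminative
        # channels and down-weight redundant ones.
        w_map = torch.sigmoid(self.weight_pred(x))        # (B, C, H, W)
        weighted = x * w_map
        # Flatten spatial dims and compute pairwise affinities between pixels
        # to capture long-range dependencies (non-local-style aggregation).
        q = weighted.view(b, c, h * w)                    # (B, C, N)
        affinity = torch.bmm(q.transpose(1, 2), q)        # (B, N, N)
        affinity = F.softmax(affinity, dim=-1)
        context = torch.bmm(q, affinity.transpose(1, 2))  # (B, C, N)
        return x + context.view(b, c, h, w)               # residual fusion


if __name__ == "__main__":
    feats = torch.randn(2, 64, 32, 32)
    module = PixelwiseChannelWeighting(64)
    print(module(feats).shape)  # torch.Size([2, 64, 32, 32])
```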