BiSeNet has proven to be a popular two-stream network for real-time segmentation. However, its principle of adding an extra path to encode spatial information is time-consuming, and the backbones borrowed from pretrained tasks, e.g., image classification, may be inefficient for image segmentation due to the deficiency of task-specific design. To handle these problems, we propose a novel and efficient structure named Short-Term Dense Concatenate network (STDC network) by removing structure redundancy. Specifically, we gradually reduce the dimension of feature maps and use their aggregation for image representation, which forms the basic module of the STDC network. In the decoder, we propose a Detail Aggregation module that integrates the learning of spatial information into low-level layers in a single-stream manner. Finally, the low-level features and deep features are fused to predict the final segmentation results. Extensive experiments on the Cityscapes and CamVid datasets demonstrate the effectiveness of our method, which achieves a promising trade-off between segmentation accuracy and inference speed. On Cityscapes, we achieve 71.9% mIoU on the test set at 250.4 FPS on an NVIDIA GTX 1080Ti, which is 45.2% faster than the latest methods, and achieve 76.8% mIoU at 97.0 FPS while inferring on higher-resolution images.
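To make the basic-module idea concrete, below is a minimal sketch of a Short-Term Dense Concatenate block, assuming a PyTorch setting: successive convolutions gradually halve the channel width, and all intermediate feature maps are concatenated as the block output. The class names, the `num_blocks` parameter, and the exact channel schedule are illustrative assumptions, not the authors' reference implementation.

```python
import torch
import torch.nn as nn


class ConvBNReLU(nn.Module):
    """Standard conv -> batch norm -> ReLU unit used inside the sketch."""
    def __init__(self, in_ch, out_ch, kernel=3, stride=1):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel, stride,
                              padding=kernel // 2, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.bn(self.conv(x)))


class STDCBlockSketch(nn.Module):
    """Gradually reduces the channel width of successive conv layers and
    concatenates all intermediate feature maps as the block output."""
    def __init__(self, in_ch, out_ch, num_blocks=4):
        super().__init__()
        # e.g. out_ch=256, num_blocks=4 -> widths [128, 64, 32, 32]
        widths = [out_ch // 2 ** (i + 1) for i in range(num_blocks - 1)]
        widths.append(out_ch // 2 ** (num_blocks - 1))  # last two match so widths sum to out_ch
        self.layers = nn.ModuleList()
        ch_in = in_ch
        for w in widths:
            self.layers.append(ConvBNReLU(ch_in, w))
            ch_in = w

    def forward(self, x):
        outs = []
        for layer in self.layers:
            x = layer(x)
            outs.append(x)
        # Aggregate features with progressively larger receptive fields.
        return torch.cat(outs, dim=1)


# Usage: a 64-channel input yields a 256-channel concatenated representation.
block = STDCBlockSketch(in_ch=64, out_ch=256)
y = block(torch.randn(1, 64, 56, 56))  # shape: [1, 256, 56, 56]
```

The point of the design is that later, narrower layers are cheap to compute yet contribute larger receptive fields, while the concatenation preserves the finer early-layer responses in the same representation.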