This paper introduces a method for image semantic segmentation based on a novel fusion scheme that takes place inside a deep convolutional neural network. The main goal of our proposal is to exploit object boundary information to improve overall segmentation performance. Unlike previous works that combine boundary and segmentation features, or that use boundary information to regularize semantic segmentation, we propose a novel approach that embeds boundary information directly into the segmentation process. To that end, our semantic segmentation method uses two streams, combined through an attention gate, forming an end-to-end Y-shaped model. To the best of our knowledge, ours is the first work to show that boundary detection can improve semantic segmentation when fused through a semantic fusion gate (attention model). We performed an extensive evaluation of our method on public data sets. Comparing our model with twelve other state-of-the-art segmenters under the same training conditions, we obtained competitive results on all data sets. Our model achieved the best mIoU on the CityScapes, CamVid, and Pascal Context data sets, and the second best on Mapillary Vistas.
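To illustrate the kind of fusion the abstract describes, below is a minimal NumPy sketch of an attention-gated combination of two feature streams. This is a hypothetical illustration, not the paper's actual architecture: the gating weights (`w_seg`, `w_bnd`, `bias`) and the additive fusion rule are assumptions for the sake of the example, and a real implementation would learn the gate with convolutional layers inside the network.

```python
import numpy as np

def sigmoid(x):
    # element-wise logistic function, maps values into (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def attention_fusion_gate(seg_feat, bnd_feat, w_seg, w_bnd, bias):
    """Hypothetical attention gate fusing a segmentation stream with a
    boundary stream. The gate is computed from both streams and decides,
    per position and channel, how much boundary information to inject."""
    gate = sigmoid(w_seg * seg_feat + w_bnd * bnd_feat + bias)
    # boundary features modulate (are added into) the segmentation stream
    return seg_feat + gate * bnd_feat

# toy feature maps of shape (height, width, channels)
rng = np.random.default_rng(0)
seg = rng.random((4, 4, 8))
bnd = rng.random((4, 4, 8))

fused = attention_fusion_gate(seg, bnd, w_seg=0.5, w_bnd=0.5, bias=0.0)
print(fused.shape)  # same spatial and channel shape as the inputs
```

In a trained network, the scalar gate parameters above would be replaced by learned convolutions, so the gate can attend differently at each spatial location, which is what lets boundary cues sharpen the segmentation only where they are informative.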