In previous deep-learning-based methods, semantic segmentation has been regarded as a static or dynamic per-pixel classification task, \textit{i.e.,} classifying each pixel representation into a specific category. However, these methods focus only on learning better pixel representations or classification kernels while ignoring the structural information of objects, which is critical to the human decision-making mechanism. In this paper, we present a new paradigm for semantic segmentation, named structure-aware extraction. Specifically, it generates the segmentation results via interactions between a set of learned structure tokens and the image feature, aiming to progressively extract the structural information of each category from the feature. Extensive experiments show that our StructToken outperforms the state of the art on three widely used benchmarks: ADE20K, Cityscapes, and COCO-Stuff-10K.
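To make the token-feature interaction concrete, the following is a minimal numpy sketch of the core idea: each learned structure token scores every spatial position of the image feature, and the resulting per-category maps are turned into a segmentation. All names and shapes here are illustrative assumptions; the actual StructToken architecture uses richer, progressive interaction modules rather than a single matrix product.

```python
import numpy as np

# Illustrative shapes (assumptions, not the paper's settings):
# C categories, D feature channels, an H x W feature map.
C, D, H, W = 4, 16, 8, 8
rng = np.random.default_rng(0)

struct_tokens = rng.standard_normal((C, D))   # one learned structure token per category
feature = rng.standard_normal((D, H * W))     # flattened image feature

# Interaction: each token scores every spatial position,
# yielding a per-category structure map of shape (C, H*W).
scores = struct_tokens @ feature

# Softmax over categories gives per-pixel class probabilities.
probs = np.exp(scores - scores.max(axis=0, keepdims=True))
probs /= probs.sum(axis=0, keepdims=True)

# Argmax over categories produces the segmentation map.
seg_map = probs.argmax(axis=0).reshape(H, W)
```

In this sketch the interaction is a single dot product; the paradigm described above instead refines the structure tokens over multiple stages so that each token progressively accumulates the structural information of its category.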