Semantic segmentation is one of the basic yet essential scene understanding tasks for an autonomous agent. Recent developments in supervised machine learning and neural networks have enjoyed great success in enhancing the performance of state-of-the-art techniques for this task. However, their superior performance is highly reliant on the availability of large-scale annotated datasets. In this paper, we propose a novel, fully unsupervised semantic segmentation method called Information Maximization and Adversarial Regularization Segmentation (InMARS). Inspired by human perception, which parses a scene into perceptual groups rather than analyzing each pixel individually, our approach first partitions an input image into meaningful regions (also known as superpixels). Next, it utilizes mutual-information maximization followed by an adversarial training strategy to cluster these regions into semantically meaningful classes. To tailor the adversarial training scheme to this problem, we incorporate adversarial pixel noise along with spatial perturbations to impose photometric and geometric invariance on the deep neural network. Our experiments demonstrate that our method achieves state-of-the-art performance on two commonly used unsupervised semantic segmentation datasets, COCO-Stuff and Potsdam.
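To make the mutual-information-maximization step concrete, the sketch below shows an IIC-style objective computed between the soft cluster assignments of a region and its photometrically or geometrically perturbed counterpart. This is only a minimal illustration under assumed tensor shapes and a hypothetical function name; it is not the paper's exact loss formulation.

```python
# Minimal sketch of a mutual-information-maximization clustering loss
# (IIC-style), assuming `p` and `p_perturbed` are soft cluster assignments
# of shape [N, C] for N regions (superpixels) under two views of the image.
# The function name and shapes are illustrative assumptions.
import torch


def mutual_info_loss(p: torch.Tensor,
                     p_perturbed: torch.Tensor,
                     eps: float = 1e-8) -> torch.Tensor:
    # Joint distribution over cluster pairs, averaged across the batch.
    joint = p.T @ p_perturbed / p.shape[0]   # [C, C]
    joint = (joint + joint.T) / 2            # symmetrize the two views
    joint = joint.clamp(min=eps)
    # Marginal distributions of each view.
    p_i = joint.sum(dim=1, keepdim=True)     # [C, 1]
    p_j = joint.sum(dim=0, keepdim=True)     # [1, C]
    # Mutual information I(z; z') = sum_ij P_ij * log(P_ij / (P_i * P_j));
    # return its negative so that minimizing the loss maximizes I.
    mi = (joint * (joint.log() - p_i.log() - p_j.log())).sum()
    return -mi
```

In such a scheme, maximizing the mutual information encourages the network to assign a region and its perturbed version to the same cluster while keeping the cluster marginals close to uniform, which is what drives the unsupervised grouping into semantic classes.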