Deep learning has not been routinely employed for semantic segmentation of the seabed environment in synthetic aperture sonar (SAS) imagery because such methods require abundant training data. Abundant training data, specifically pixel-level labels for all images, is usually unavailable for SAS imagery due to the complex logistics (e.g., diver surveys, chase boats, precision positioning information) needed to obtain accurate ground truth. Many algorithms based on hand-crafted features have been proposed to segment SAS imagery in an unsupervised fashion. However, there is still room for improvement because the feature extraction step of these methods is fixed. In this work, we present a new iterative unsupervised algorithm for learning deep features for SAS image segmentation. Our proposed algorithm alternates between clustering superpixels and updating the parameters of a convolutional neural network (CNN) so that the feature extraction for image segmentation can be optimized. We demonstrate the efficacy of our method on a realistic benchmark dataset. Our results show that the proposed method performs considerably better than current state-of-the-art methods in SAS image segmentation.
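The alternating scheme described above (cluster superpixel features, then update the network so its features better match the cluster assignments) can be sketched as follows. This is a minimal illustrative toy, not the paper's implementation: a linear map stands in for the CNN, simple k-means provides the clustering step, and all array names, shapes, and the learning rate are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))      # raw descriptors for 200 superpixels (illustrative)
W = rng.normal(size=(16, 8)) * 0.1  # stand-in "network" parameters (a linear map)
K, lr = 4, 0.05                     # number of clusters, learning rate (assumed)

for it in range(20):
    F = X @ W                       # feature-extraction step: map superpixels to features
    if it == 0:
        # initialize centroids from a random subset of features
        C = F[rng.choice(len(F), K, replace=False)]
    # --- clustering step: assign each superpixel to its nearest centroid ---
    d = ((F[:, None, :] - C[None]) ** 2).sum(-1)   # squared distances, shape (200, K)
    labels = d.argmin(1)
    for k in range(K):
        if (labels == k).any():
            C[k] = F[labels == k].mean(0)          # recompute centroids
    # --- parameter-update step: gradient step pulling features toward
    #     their assigned centroids (surrogate for CNN training) ---
    G = X.T @ (F - C[labels]) / len(X)
    W -= lr * G

print("mean within-cluster distance:", float(d.min(1).mean()))
```

In the actual method the linear map would be a CNN trained by backpropagation, and the clusters would serve as pseudo-labels for the segmentation output; the loop structure, however, is the same alternation between assignment and feature learning.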