With the continuous development of underwater vision technology, increasing numbers of remote sensing images can be obtained. In underwater scenes, sonar sensors are currently the most effective remote perception devices, and the sonar images they capture provide rich environmental information. To analyze a given scene, we often need to merge sonar images taken at different times, at different sonar frequencies, and from different viewpoints. However, these conditions introduce nonlinear intensity differences between the sonar images, which render traditional matching methods almost ineffective. This paper proposes a nonlinear-intensity sonar image matching method that combines local feature points with deep convolutional features. The method has two key advantages: (i) we generate data samples related to local feature points based on a self-learning idea; (ii) we use a convolutional neural network (CNN) with a Siamese architecture to measure the similarity of local positions in a sonar image pair. Our method encapsulates the feature extraction and feature matching stages in a single model, directly learns a mapping from image patch pairs to matching labels, and thus accomplishes the matching task in a near end-to-end manner. Feature matching experiments were carried out on sonar images acquired by an autonomous underwater vehicle (AUV) in a real underwater environment. Experimental results show that our method achieves better matching performance and strong robustness.
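To illustrate the weight-sharing idea behind the Siamese similarity measurement described above, the following is a minimal numpy sketch, not the paper's actual architecture: both patches pass through the same feature extractor (here a toy single-kernel convolution with pooling, standing in for the CNN branch), and similarity is the negative Euclidean distance between the two embeddings. The kernel, pooling, and distance choices are illustrative assumptions.

```python
import numpy as np

def extract_features(patch, kernel):
    """Shared feature extractor: one valid 2-D convolution followed by
    row-wise average pooling (a toy stand-in for the CNN branch)."""
    kh, kw = kernel.shape
    ph, pw = patch.shape
    out = np.empty((ph - kh + 1, pw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(patch[i:i + kh, j:j + kw] * kernel)
    return out.mean(axis=1)  # pooled feature vector

def siamese_similarity(patch_a, patch_b, kernel):
    """Both patches go through the SAME extractor (weight sharing);
    similarity is the negative Euclidean distance between embeddings,
    so identical patches score 0 and dissimilar ones score below 0."""
    fa = extract_features(patch_a, kernel)
    fb = extract_features(patch_b, kernel)
    return -np.linalg.norm(fa - fb)
```

In the full method, the extractor would be a trained CNN and a matching label would be obtained by thresholding (or classifying) the similarity score; the weight sharing shown here is the defining property of the Siamese setup.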