Underwater robots typically rely on acoustic sensors such as sonar to perceive their surroundings. However, these sensors are often inundated with noise from multiple sources, which makes meaningful inference about features, objects, or boundary returns from the raw data very difficult. While several conventional methods for dealing with this noise exist, their performance is unsatisfactory. This paper presents a novel application of conditional Generative Adversarial Networks (cGANs): we train a model to produce noise-free sonar images, outperforming several conventional filtering methods. Estimating free space is crucial for autonomous robots performing active exploration and mapping. We therefore apply our approach to underwater occupancy mapping and show superior free- and occupied-space inference compared to conventional methods.
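The occupancy-mapping step the abstract refers to is commonly implemented with per-cell log-odds updates, where each sonar return adds evidence that a cell is occupied and each beam traversal adds evidence that it is free. The sketch below illustrates that standard technique only; the evidence weights, function names, and update loop are illustrative assumptions, not the paper's actual implementation.

```python
import math

# Illustrative evidence weights (assumed values, not from the paper):
# a return in a cell raises its log-odds, a traversed cell lowers it.
L_OCC = math.log(0.7 / 0.3)   # evidence added when a sonar return hits a cell
L_FREE = math.log(0.3 / 0.7)  # evidence added when the beam passes through

def update_cell(log_odds, hit):
    """Accumulate one observation's evidence for a single grid cell."""
    return log_odds + (L_OCC if hit else L_FREE)

def probability(log_odds):
    """Convert accumulated log-odds back to an occupancy probability."""
    return 1.0 / (1.0 + math.exp(-log_odds))

# A cell observed as occupied three times, starting from an uninformed
# prior (log-odds 0, i.e. probability 0.5).
l = 0.0
for _ in range(3):
    l = update_cell(l, hit=True)
p = probability(l)
```

Repeated consistent observations drive the probability toward 0 or 1, which is why denoised sonar images (fewer spurious returns) translate directly into better free- and occupied-space estimates.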