For safe and efficient operation, mobile robots need to perceive their environment, and in particular, perform tasks such as obstacle detection, localization, and mapping. Although robots are often equipped with microphones and speakers, the audio modality is rarely used for these tasks. Compared to the localization of sound sources, for which many practical solutions exist, algorithms for active echolocation are less developed and often impose hardware requirements that are out of reach for small robots. We propose an end-to-end pipeline for sound-based localization and mapping that is targeted at, but not limited to, robots equipped with only simple buzzers and low-end microphones. The method is model-based, runs in real time, and requires no prior calibration or training. We successfully test the algorithm on the e-puck robot with its integrated audio hardware, and on the Crazyflie drone, for which we design a reproducible audio extension deck. We achieve centimeter-level wall localization on both platforms when the robots are static during the measurement process. Even in the more challenging setting of a flying drone, we can successfully localize walls, which we demonstrate in a proof-of-concept multi-wall localization and mapping demo.
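To make the core idea of buzzer-based wall localization concrete, the sketch below illustrates the simplest building block of active echolocation: estimating the distance to a wall from the round-trip delay of an echo, recovered by cross-correlating the recording with the emitted probe signal. This is a minimal, self-contained illustration on synthetic data, not the paper's actual pipeline; the chirp parameters, sampling rate, and the simulated wall at 0.5 m are all assumptions made for the example.

```python
import numpy as np

C_SOUND = 343.0  # speed of sound in air [m/s] (assumed, ~20 C)
FS = 44100       # sampling rate [Hz] (assumed)

def chirp(duration=0.005, f0=2000.0, f1=6000.0):
    """Linear chirp: a common probe signal for echolocation."""
    t = np.arange(int(duration * FS)) / FS
    return np.sin(2 * np.pi * (f0 * t + (f1 - f0) / (2 * duration) * t**2))

def wall_distance(emitted, recorded):
    """Estimate wall distance from the round-trip delay of the
    strongest echo, found via cross-correlation (matched filter)."""
    corr = np.correlate(recorded, emitted, mode="full")
    lag = np.argmax(corr) - (len(emitted) - 1)  # delay in samples
    return C_SOUND * lag / FS / 2.0  # halve: round trip -> one way

# Synthetic test: a wall at 0.5 m delays the echo by 2 * 0.5 / C_SOUND.
probe = chirp()
delay = int(round(2 * 0.5 / C_SOUND * FS))
recording = np.zeros(delay + len(probe))
recording[delay:] += 0.2 * probe  # attenuated echo
recording += 0.01 * np.random.default_rng(0).standard_normal(len(recording))
print(f"estimated distance: {wall_distance(probe, recording):.3f} m")
# prints a distance close to 0.5 m
```

In practice the recording also contains the direct buzzer-to-microphone path and multiple echoes, which is why a model-based approach, as in the abstract above, is needed to disentangle several walls from a moving platform.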