Visual-audio navigation (VAN) is attracting increasing attention from the robotics community due to its broad applications, \emph{e.g.}, household robots and rescue robots. In this task, an embodied agent must search for and navigate to a sound source using egocentric visual and audio observations. However, existing methods are limited in two respects: 1) poor generalization to unheard sound categories; 2) sample inefficiency in training. To address these two problems, we propose a brain-inspired, plug-and-play method that learns a semantic-agnostic and spatial-aware representation for generalizable visual-audio navigation. We carefully design two auxiliary tasks, each accelerating the learning of a representation with one of the desired characteristics above. With these auxiliary tasks, the agent learns a spatially correlated representation of visual and audio inputs that transfers to environments with novel sounds and maps. Experimental results on realistic 3D scenes (Replica and Matterport3D) demonstrate that our method achieves better generalization when transferred zero-shot to scenes with unseen maps and unheard sound categories.
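As a rough illustration of the plug-and-play idea only, the sketch below attaches hypothetical auxiliary heads to a shared audio-visual encoder and adds their loss to the navigation policy loss. The module names, input shapes, and the choice of a spatial regression target are assumptions made for this sketch; the paper's actual auxiliary tasks are not reproduced here.

```python
import torch
import torch.nn as nn

class AudioVisualEncoder(nn.Module):
    """Hypothetical shared encoder for egocentric visual and audio observations."""
    def __init__(self, feat_dim=128):
        super().__init__()
        # Assumed input shapes: RGB (B, 3, 64, 64), spectrogram (B, 2, 65, 26).
        self.visual_net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 256), nn.ReLU())
        self.audio_net = nn.Sequential(nn.Flatten(), nn.Linear(2 * 65 * 26, 256), nn.ReLU())
        self.fuse = nn.Linear(512, feat_dim)

    def forward(self, rgb, spectrogram):
        v = self.visual_net(rgb)
        a = self.audio_net(spectrogram)
        return self.fuse(torch.cat([v, a], dim=-1))

encoder = AudioVisualEncoder()
# Illustrative plug-and-play heads: the auxiliary head stands in for any
# spatially grounded self-supervised target (e.g., relative direction or
# distance regression) trained alongside the navigation policy.
aux_head = nn.Linear(128, 2)      # hypothetical 2-D spatial target
policy_head = nn.Linear(128, 4)   # discrete navigation actions

def total_loss(rgb, spec, action_target, spatial_target, aux_weight=0.1):
    # Shared representation feeds both the policy and the auxiliary objective;
    # the auxiliary loss is simply added to the policy loss with a small weight.
    z = encoder(rgb, spec)
    policy_loss = nn.functional.cross_entropy(policy_head(z), action_target)
    aux_loss = nn.functional.mse_loss(aux_head(z), spatial_target)
    return policy_loss + aux_weight * aux_loss

# Example forward pass with dummy data.
rgb = torch.randn(8, 3, 64, 64)
spec = torch.randn(8, 2, 65, 26)
actions = torch.randint(0, 4, (8,))
targets = torch.randn(8, 2)
print(total_loss(rgb, spec, actions, targets))
```

Because the auxiliary heads touch only the shared encoder output, this wiring can in principle be dropped into an existing navigation pipeline without changing the policy architecture, which is what "plug-and-play" suggests.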