The images and sounds that we perceive undergo subtle but geometrically consistent changes as we rotate our heads. In this paper, we use these cues to solve a problem we call Sound Localization from Motion (SLfM): jointly estimating camera rotation and localizing sound sources. We learn to solve these tasks solely through self-supervision. A visual model predicts camera rotation from a pair of images, while an audio model predicts the direction of sound sources from binaural sounds. We train these models to generate predictions that agree with one another. At test time, the models can be deployed independently. To obtain a feature representation that is well-suited to solving this challenging problem, we also propose a method for learning an audio-visual representation through cross-view binauralization: estimating binaural sound from one view, given images and sound from another. Our model can successfully estimate accurate rotations on both real and synthetic scenes, and localize sound sources with accuracy competitive with state-of-the-art self-supervised approaches. Project site: https://ificl.github.io/SLfM/
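To make the cross-modal agreement objective concrete, the following is a minimal PyTorch-style sketch of one plausible formulation: if the head rotates by an angle theta between two views, a static source's apparent azimuth shifts by -theta, so the visual rotation estimate and the change in audio direction estimates can be penalized for disagreeing. The network names, architectures, and sign convention here are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins for the paper's two models; shapes and layer
# choices are assumptions for illustration only.
class VisualRotationNet(nn.Module):
    """Predicts camera yaw rotation (radians) from a pair of RGB frames."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(6, 32, kernel_size=7, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 1),
        )

    def forward(self, img_a, img_b):
        # (B, 3, H, W) each -> concatenate along channels -> (B,)
        return self.encoder(torch.cat([img_a, img_b], dim=1)).squeeze(-1)

class AudioDirectionNet(nn.Module):
    """Predicts a sound source's azimuth (radians) from binaural audio."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(2, 32, kernel_size=15, stride=8), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, 1),
        )

    def forward(self, waveform):
        # (B, 2, T) two-channel waveform -> (B,)
        return self.encoder(waveform).squeeze(-1)

def agreement_loss(vis_net, aud_net, img_a, img_b, audio_a, audio_b):
    """Self-supervised consistency: the rotation implied by the audio
    model's direction change should match the visual model's estimate."""
    rot = vis_net(img_a, img_b)   # predicted head rotation between views
    az_a = aud_net(audio_a)       # source azimuth at the first pose
    az_b = aud_net(audio_b)       # source azimuth at the second pose
    return ((az_a - az_b) - rot).abs().mean()
```

Because the loss only couples the two networks during training, each one can be deployed independently at test time, as the abstract notes.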