Recent progress in deep learning has enabled many advances in sound separation and visual scene understanding. However, extracting sound sources that are apparent in natural videos remains an open problem. In this work, we present AudioScope, a novel audio-visual sound separation framework that can be trained without supervision to isolate on-screen sound sources from real in-the-wild videos. Prior audio-visual separation work assumed artificial limitations on the domain of sound classes (e.g., to speech or music), constrained the number of sources, and required strong sound separation or visual segmentation labels. AudioScope overcomes these limitations, operating on an open domain of sounds, with variable numbers of sources, and without labels or prior visual segmentation. The training procedure for AudioScope uses mixture invariant training (MixIT) to separate synthetic mixtures of mixtures (MoMs) into individual sources, where noisy labels for mixtures are provided by an unsupervised audio-visual coincidence model. Using the noisy labels, along with attention between video and audio features, AudioScope learns to identify audio-visual similarity and to suppress off-screen sounds. We demonstrate the effectiveness of our approach using a dataset of video clips extracted from open-domain YFCC100m video data. This dataset contains a wide diversity of sound classes recorded in unconstrained conditions, making previous methods unsuitable for this setting. For evaluation and semi-supervised experiments, we collected human labels for the presence of on-screen and off-screen sounds on a small subset of clips.
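To make the training objective concrete, the following is a minimal NumPy sketch of the MixIT loss described above, assuming two reference mixtures whose sum forms the mixture of mixtures (MoM) and M estimated sources from the separation model; the function names and the negative-SNR signal loss are illustrative assumptions, not code from AudioScope.

```python
# Hypothetical sketch of the MixIT objective: each estimated source is assigned
# to exactly one of the two reference mixtures, and the loss is minimized over
# all 2**M binary assignments.
import itertools
import numpy as np


def snr_loss(reference, estimate, eps=1e-8):
    """Negative signal-to-noise ratio in dB (lower is better); an assumed choice
    of signal-level loss for this sketch."""
    error = reference - estimate
    snr = 10.0 * np.log10(
        (np.sum(reference ** 2) + eps) / (np.sum(error ** 2) + eps)
    )
    return -snr


def mixit_loss(mix1, mix2, est_sources):
    """Best-assignment MixIT loss for two reference mixtures.

    est_sources: array of shape (M, num_samples) output by the separation model.
    """
    m = est_sources.shape[0]
    best = np.inf
    for assignment in itertools.product([0, 1], repeat=m):
        mask = np.array(assignment, dtype=bool)
        est1 = est_sources[~mask].sum(axis=0)  # sources assigned to mixture 1
        est2 = est_sources[mask].sum(axis=0)   # sources assigned to mixture 2
        loss = snr_loss(mix1, est1) + snr_loss(mix2, est2)
        best = min(best, loss)
    return best


# Toy usage: the MoM fed to the model would be mix1 + mix2; here the model
# output is replaced by random stand-in sources.
rng = np.random.default_rng(0)
mix1, mix2 = rng.standard_normal((2, 16000))
est_sources = rng.standard_normal((4, 16000))  # stand-in for model output
print(mixit_loss(mix1, mix2, est_sources))
```

Because the assignment is between estimated sources and mixtures rather than ground-truth sources, this objective requires no isolated-source supervision, which is what allows training directly on in-the-wild video soundtracks.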