In this paper, we focus on a recently proposed task, Audio-Visual Segmentation (AVS), which requires establishing fine-grained correspondence between the audio stream and image pixels. Learning such correspondence faces two key challenges: (1) audio signals inherently exhibit a high degree of information density, as sounds produced by multiple objects are entangled within the same audio stream; (2) audio signals from objects of the same category tend to have similar frequencies, which hampers distinguishing the target object and consequently leads to ambiguous segmentation results. To this end, we propose an Audio Unmixing and Semantic Segmentation Network (AUSS), which encourages unmixing complicated audio signals and distinguishing similar sounds. Technically, AUSS unmixes the audio signal into a set of audio queries and interacts them with visual features through masked attention mechanisms. To encourage these audio queries to capture distinctive features embedded within the audio, two self-supervised losses are introduced as additional supervision at both the class and mask levels. Extensive experimental results on the AVSBench benchmark show that AUSS sets a new state of the art on both the single-source and multi-source subsets, demonstrating its effectiveness in bridging the gap between the audio and vision modalities.
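To make the query-based interaction concrete, below is a minimal sketch (not the authors' implementation) of how unmixed audio queries might attend to visual features through masked attention, in the spirit of Mask2Former-style decoders. All shapes, layer sizes, and the mask-thresholding rule are assumptions for illustration only.

```python
import torch
import torch.nn as nn


class MaskedAudioVisualAttention(nn.Module):
    """Hypothetical masked cross-attention layer between audio queries
    and per-frame visual features."""

    def __init__(self, dim=256, num_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, audio_queries, visual_feats, prev_mask_logits):
        """
        audio_queries:    (B, Q, C)  queries unmixed from the audio stream
        visual_feats:     (B, HW, C) flattened visual features
        prev_mask_logits: (B, Q, HW) mask predictions from the previous layer
        """
        # Restrict each query's attention to pixels its previous mask treats
        # as foreground (sigmoid > 0.5); True entries are blocked.
        attn_mask = prev_mask_logits.sigmoid() < 0.5
        # Avoid fully-masked rows, which would produce NaNs in the softmax.
        attn_mask[attn_mask.all(dim=-1)] = False
        # Expand to (B * num_heads, Q, HW) as expected by MultiheadAttention.
        attn_mask = attn_mask.repeat_interleave(self.attn.num_heads, dim=0)

        out, _ = self.attn(audio_queries, visual_feats, visual_feats,
                           attn_mask=attn_mask)
        return self.norm(audio_queries + out)


# Usage sketch with assumed shapes: 4 frames, 16 audio queries, 32x32 features.
layer = MaskedAudioVisualAttention(dim=256, num_heads=8)
queries = torch.randn(4, 16, 256)
feats = torch.randn(4, 32 * 32, 256)
masks = torch.randn(4, 16, 32 * 32)
updated_queries = layer(queries, feats, masks)
```

The design intuition is that each audio query, representing one unmixed sound source, only attends to image regions its current mask estimate deems relevant, which helps keep similar-sounding objects from collapsing onto the same pixels.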