Existing audio-visual event localization (AVE) methods handle manually trimmed videos, each containing only a single event instance. However, this setting is unrealistic, as natural videos often contain numerous audio-visual events of different categories. To better adapt to real-life applications, in this paper we focus on the task of dense-localizing audio-visual events, which aims to jointly localize and recognize all audio-visual events occurring in an untrimmed video. The problem is challenging as it requires fine-grained audio-visual scene and context understanding. To tackle this problem, we introduce the first Untrimmed Audio-Visual (UnAV-100) dataset, which contains 10K untrimmed videos with over 30K audio-visual events. Each video has 2.8 audio-visual events on average, and the events are usually related to each other and may co-occur, as in real-life scenes. Next, we formulate the task using a new learning-based framework, which is capable of fully integrating the audio and visual modalities to localize audio-visual events of various lengths and capture the dependencies between them in a single pass. Extensive experiments demonstrate the effectiveness of our method, as well as the significance of multi-scale cross-modal perception and dependency modeling for this task.
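To make the single-pass, multi-scale formulation concrete, below is a minimal sketch in PyTorch. It is an illustration under our own assumptions, not the authors' released implementation: the class names (CrossModalBlock, DenseAVLocalizer) and all hyperparameters are hypothetical. The sketch fuses audio and visual features with cross-attention, builds a temporal feature pyramid so that deeper levels cover longer events, and attaches per-moment classification and boundary-regression heads at every scale.

```python
# A minimal sketch (assumption, not the authors' code) of single-pass dense
# audio-visual event localization: cross-modal fusion, a temporal pyramid for
# events of various lengths, and dense classification/regression heads.
import torch
import torch.nn as nn

class CrossModalBlock(nn.Module):
    """Visual features attend to audio features (hypothetical fusion block)."""
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, v: torch.Tensor, a: torch.Tensor) -> torch.Tensor:
        fused, _ = self.attn(query=v, key=a, value=a)  # (B, T, dim)
        return self.norm(v + fused)

class DenseAVLocalizer(nn.Module):
    def __init__(self, dim: int = 256, num_classes: int = 100, levels: int = 4):
        super().__init__()
        self.fuse = CrossModalBlock(dim)
        # Strided convs halve the temporal resolution at each pyramid level,
        # so deeper levels have larger receptive fields for longer events.
        self.downsample = nn.ModuleList(
            nn.Conv1d(dim, dim, kernel_size=3, stride=2, padding=1)
            for _ in range(levels - 1)
        )
        # Heads shared across scales: per-moment class logits and distances
        # to the event onset/offset (2 values per time step).
        self.cls_head = nn.Conv1d(dim, num_classes, kernel_size=3, padding=1)
        self.reg_head = nn.Conv1d(dim, 2, kernel_size=3, padding=1)

    def forward(self, visual: torch.Tensor, audio: torch.Tensor):
        x = self.fuse(visual, audio).transpose(1, 2)   # (B, dim, T)
        outputs = []
        for level in range(len(self.downsample) + 1):
            scores = self.cls_head(x)                  # (B, num_classes, T_level)
            bounds = self.reg_head(x).relu()           # (B, 2, T_level)
            outputs.append((scores, bounds))
            if level < len(self.downsample):
                x = self.downsample[level](x)
        # At inference, outputs would be decoded into (start, end, class)
        # candidates and de-duplicated, e.g. with NMS.
        return outputs

# Example: 112 time steps of 256-d visual and audio features.
v = torch.randn(2, 112, 256)
a = torch.randn(2, 112, 256)
preds = DenseAVLocalizer()(v, a)
print([p[0].shape for p in preds])  # logits at 112, 56, 28, 14 steps
```

The pyramid-plus-dense-heads design reflects the abstract's emphasis on handling events of various lengths in one forward pass: short events are resolved at fine temporal scales, long ones at coarse scales, and cross-modal attention lets each moment's prediction depend on both modalities.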