Active speaker detection (ASD) is a multi-modal task that aims to identify which, if any, of a set of candidates is speaking. Current audio-visual approaches to ASD typically rely on visually pre-extracted face tracks (sequences of consecutive face crops) and the corresponding monaural audio. However, their recall is often low because only visible faces are included in the candidate set. Monaural audio can successfully detect the presence of speech activity but fails to localize the speaker due to the lack of spatial cues. Our solution extends the audio front-end with a microphone array. We train an audio convolutional neural network (CNN), in combination with beamforming techniques, to regress the speaker's horizontal position directly in the video frames. We propose to generate weak labels using a pre-trained active speaker detector applied to pre-extracted face tracks. Our pipeline embraces the "student-teacher" paradigm: a pre-trained "teacher" network produces pseudo-labels from the visual input, and the "student", an audio network, is trained to reproduce them. At inference, the student network can independently localize the speaker in the visual frames directly from the audio input. Experimental results on newly collected data show that our approach significantly outperforms a variety of baselines as well as the teacher network itself. It also yields an excellent speech activity detector.
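The abstract does not specify the architectures involved, so the following is only a minimal PyTorch sketch of the student-teacher setup it describes: a frozen visual "teacher" is assumed to have already produced pseudo-label positions from face tracks, and an audio "student" CNN regresses the speaker's horizontal frame position from multi-channel microphone-array features. All module names, tensor shapes, and the L1 loss choice are illustrative assumptions, not the paper's actual design.

```python
# Hypothetical sketch of the weakly supervised student-teacher training
# described in the abstract; shapes, layers, and loss are assumptions.
import torch
import torch.nn as nn


class AudioStudent(nn.Module):
    """Audio CNN that regresses the active speaker's horizontal position
    (normalized to [0, 1] across the video frame width) from a
    multi-channel microphone-array spectrogram."""

    def __init__(self, n_mics: int = 4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(n_mics, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(64, 1), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_mics, freq_bins, time_frames)
        return self.head(self.encoder(x)).squeeze(-1)


def training_step(student, optimizer, array_spec, teacher_positions):
    """One step of weak supervision: fit the visual teacher's
    pseudo-label positions from audio alone."""
    optimizer.zero_grad()
    pred = student(array_spec)
    loss = nn.functional.l1_loss(pred, teacher_positions)
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    student = AudioStudent(n_mics=4)
    opt = torch.optim.Adam(student.parameters(), lr=1e-4)
    spec = torch.randn(8, 4, 128, 64)  # dummy array spectrograms
    pseudo = torch.rand(8)             # dummy teacher pseudo-labels in [0, 1]
    print(training_step(student, opt, spec, pseudo))
```

At inference, only AudioStudent would run, which matches the abstract's claim that the student localizes the speaker in the visual frames directly from audio.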