Successful active speaker detection requires a three-stage pipeline: (i) audio-visual encoding for all speakers in the clip, (ii) inter-speaker relation modeling between a reference speaker and the background speakers within each frame, and (iii) temporal modeling for the reference speaker. Each stage of this pipeline plays an important role in the final performance of the resulting architecture. Based on a series of controlled experiments, this work presents several practical guidelines for audio-visual active speaker detection. Correspondingly, we present a new architecture called ASDNet, which achieves a new state of the art on the AVA-ActiveSpeaker dataset with an mAP of 93.5%, outperforming the second-best method by a large margin of 4.7%. Our code and pretrained models are publicly available.
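To make the three-stage structure concrete, the sketch below outlines one possible forward pass in PyTorch: per-speaker audio-visual encoding, per-frame relation modeling between the reference speaker and pooled background speakers, and a recurrent temporal model over the reference speaker's sequence. All module choices, dimensions, and the mean-pooling of background speakers are illustrative assumptions for exposition, not the configuration used by ASDNet.

```python
import torch
import torch.nn as nn


class ThreeStageASD(nn.Module):
    """Minimal sketch of a three-stage active speaker detection pipeline.

    Stage 1: audio-visual encoding of every speaker in the clip.
    Stage 2: per-frame relation modeling between the reference speaker
             and the background speakers.
    Stage 3: temporal modeling over the reference speaker's sequence.
    """

    def __init__(self, feat_dim=128):
        super().__init__()
        # Stage 1: placeholder encoders (simple linear layers stand in for
        # the video/audio backbones an actual system would use).
        self.video_enc = nn.Linear(512, feat_dim)
        self.audio_enc = nn.Linear(256, feat_dim)
        # Stage 2: fuse reference speaker with pooled background speakers.
        self.relation = nn.Sequential(
            nn.Linear(4 * feat_dim, feat_dim), nn.ReLU())
        # Stage 3: temporal model over the reference speaker's frames.
        self.temporal = nn.GRU(feat_dim, feat_dim, batch_first=True)
        self.classifier = nn.Linear(feat_dim, 1)  # speaking / not speaking

    def forward(self, video, audio):
        # video: (T, S, 512) per-frame visual features for S speakers
        # audio: (T, 256)    per-frame audio features shared by all speakers
        T, S, _ = video.shape
        v = self.video_enc(video)                                  # (T, S, D)
        a = self.audio_enc(audio).unsqueeze(1).expand(T, S, -1)    # (T, S, D)
        av = torch.cat([v, a], dim=-1)                             # (T, S, 2D)

        # Speaker 0 is treated as the reference; background speakers are
        # mean-pooled (one simple choice of inter-speaker aggregation).
        ref = av[:, 0]                                             # (T, 2D)
        bg = av[:, 1:].mean(dim=1) if S > 1 else torch.zeros_like(ref)
        rel = self.relation(torch.cat([ref, bg], dim=-1))          # (T, D)

        out, _ = self.temporal(rel.unsqueeze(0))                   # (1, T, D)
        return self.classifier(out).squeeze(0).squeeze(-1)         # (T,) logits


if __name__ == "__main__":
    model = ThreeStageASD()
    logits = model(torch.randn(25, 3, 512), torch.randn(25, 256))
    print(logits.shape)  # torch.Size([25])
```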