We introduce a new efficient framework, the Unified Context Network (UniCon), for robust active speaker detection (ASD). Traditional methods for ASD usually operate on each candidate's pre-cropped face track separately and do not sufficiently consider the relationships among the candidates. This potentially limits performance, especially in challenging scenarios with low-resolution faces, multiple candidates, etc. Our solution is a novel, unified framework that focuses on jointly modeling multiple types of contextual information: spatial context to indicate the position and scale of each candidate's face, relational context to capture the visual relationships among the candidates and contrast audio-visual affinities with each other, and temporal context to aggregate long-term information and smooth out local uncertainties. Based on such information, our model optimizes all candidates in a unified process for robust and reliable ASD. A thorough ablation study is performed on several challenging ASD benchmarks under different settings. In particular, our method outperforms the state-of-the-art by a large margin of about 15% mean Average Precision (mAP) absolute on two challenging subsets: one with three candidate speakers, and the other with faces smaller than 64 pixels. Together, our UniCon achieves 92.0% mAP on the AVA-ActiveSpeaker validation set, surpassing 90% for the first time on this challenging dataset at the time of submission. Project website: https://unicon-asd.github.io/.
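To make the three kinds of context concrete, the following is a minimal, illustrative sketch (assuming PyTorch) of how per-candidate audio-visual features might be fused with spatial, relational, and temporal context in one joint pass over all candidates. All module names, tensor shapes, and hyperparameters here are hypothetical for illustration only and are not taken from the paper or its released code.

```python
import torch
import torch.nn as nn

class ContextFusionSketch(nn.Module):
    """Illustrative sketch (not the authors' implementation): jointly score all
    candidates by combining spatial, relational, and temporal context."""

    def __init__(self, feat_dim=128, num_heads=4):
        super().__init__()
        # Spatial context: embed each candidate's face position and scale (x, y, w, h).
        self.spatial_embed = nn.Linear(4, feat_dim)
        # Relational context: candidates in the same frame attend to one another.
        self.relational = nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)
        # Temporal context: recurrent aggregation over time for each candidate.
        self.temporal = nn.GRU(feat_dim, feat_dim, batch_first=True)
        # Frame-level active-speaker logit per candidate.
        self.classifier = nn.Linear(feat_dim, 1)

    def forward(self, av_feats, boxes):
        # av_feats: (B, T, C, D) audio-visual features per candidate per frame
        # boxes:    (B, T, C, 4) normalized face boxes (position and scale)
        B, T, C, D = av_feats.shape
        x = av_feats + self.spatial_embed(boxes)        # inject spatial context
        x = x.reshape(B * T, C, D)
        x, _ = self.relational(x, x, x)                 # contrast candidates within a frame
        x = x.reshape(B, T, C, D).permute(0, 2, 1, 3)   # (B, C, T, D)
        x, _ = self.temporal(x.reshape(B * C, T, D))    # smooth over long-term context
        return self.classifier(x).reshape(B, C, T)      # per-candidate frame-level scores
```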