Active speaker detection requires a solid integration of multi-modal cues. While individual modalities can approximate a solution, accurate predictions can only be achieved by explicitly fusing the audio and visual features and modeling their temporal progression. Despite the inherently multi-modal nature of the task, current methods still focus on modeling and fusing short-term audiovisual features for individual speakers, often at the frame level. In this paper we present a novel approach to active speaker detection that directly addresses the multi-modal nature of the problem and provides a straightforward strategy in which independent visual features from potential speakers in the scene are assigned to a previously detected speech event. Our experiments show that a small graph data structure built from a single frame allows us to approximate an instantaneous audio-visual assignment problem. Moreover, the temporal extension of this initial graph achieves a new state of the art on the AVA-ActiveSpeaker dataset with an mAP of 88.8\%.
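The single-frame assignment idea above can be sketched with a minimal graph structure: one audio node for the detected speech event, one node per candidate face, and weighted edges between them. This is an illustrative assumption only; the feature extraction, the affinity function (a plain dot product here), and all names are hypothetical stand-ins, not the paper's actual architecture.

```python
# Hypothetical sketch of an instantaneous audio-visual assignment graph.
# Features and the scoring function are illustrative assumptions.

def build_frame_graph(audio_feat, face_feats):
    """Build a star graph: one audio node linked to every candidate face.

    Edge weights use a dot-product affinity as a placeholder for a
    learned audio-visual compatibility score.
    """
    edges = []
    for i, face_feat in enumerate(face_feats):
        score = sum(a * b for a, b in zip(audio_feat, face_feat))
        edges.append((i, score))
    return {"audio": audio_feat, "faces": list(face_feats), "edges": edges}

def assign_speaker(graph, threshold=0.0):
    """Assign the speech event to the highest-affinity face, if any."""
    if not graph["edges"]:
        return None  # no candidate speakers in this frame
    best_face, best_score = max(graph["edges"], key=lambda e: e[1])
    return best_face if best_score > threshold else None
```

A temporal extension, as the abstract suggests, would link such per-frame graphs across adjacent frames so the assignment can draw on context beyond a single instant.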