Automatic audio-visual expression recognition can play an important role in communication services such as tele-health, VoIP calls, and human-machine interaction. The accuracy of audio-visual expression recognition could benefit from the interplay between the two modalities. However, most audio-visual expression recognition systems, trained under ideal conditions, fail to generalize to real-world scenarios where either the audio or the visual modality may be missing for reasons such as limited bandwidth, the interactors' orientation, or caller-initiated muting. This paper studies the performance of a state-of-the-art transformer when one of the modalities is missing. We conduct ablation studies to evaluate the model in the absence of either modality. Further, we propose a strategy that randomly ablates visual inputs during training, at the clip or frame level, to mimic real-world scenarios. Results on in-the-wild data indicate significantly improved generalization for the proposed models trained with missing cues, with gains of up to 17% for frame-level ablations, showing that these training strategies cope better with the loss of input modalities.
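The clip- and frame-level ablation strategy described above can be sketched as a simple training-time augmentation. This is a minimal illustration, not the authors' implementation: the function name, parameter names, and drop probabilities are all assumptions, and the visual input is assumed to be a per-clip array of frame features of shape (T, D).

```python
import numpy as np

def ablate_visual(frames, p_clip=0.5, p_frame=0.2, level="frame", rng=None):
    """Randomly zero out visual inputs to mimic a missing modality.

    frames: array of shape (T, D) -- T video frames, D features per frame.
    level:  "clip"  -> drop the entire clip with probability p_clip;
            "frame" -> drop each frame independently with probability p_frame.
    Returns a copy; the original array is left untouched.
    """
    rng = rng or np.random.default_rng()
    out = frames.copy()
    if level == "clip":
        if rng.random() < p_clip:
            out[:] = 0.0  # whole visual stream missing for this clip
    else:
        mask = rng.random(len(out)) < p_frame
        out[mask] = 0.0  # individual frames missing (e.g. dropped packets)
    return out
```

Applied to each training clip, this exposes the model to partially or fully missing visual streams, so it learns to fall back on the audio modality at test time.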