In this work we tackle the task of video-based audio-visual emotion recognition within the scope of the 2nd Workshop and Competition on Affective Behavior Analysis in-the-wild (ABAW). Standard methodologies that rely solely on the extraction of facial features often fall short of accurate emotion prediction when this source of affective information is inaccessible due to head/body orientation, low resolution, or poor illumination. We aim to alleviate this problem by leveraging bodily as well as contextual features as part of a broader emotion recognition framework. A standard CNN-RNN cascade constitutes the backbone of our proposed model for sequence-to-sequence (seq2seq) learning. Apart from learning through the \textit{RGB} input modality, we construct an aural stream that operates on sequences of extracted mel-spectrograms. Our extensive experiments on the challenging and newly assembled Affect-in-the-wild-2 (Aff-Wild2) dataset verify the superiority of our methods over existing approaches, while by properly incorporating all of the aforementioned modules in a network ensemble, we surpass the previous best published recognition scores on the official validation set. All the code was implemented using PyTorch\footnote{\url{https://pytorch.org/}} and is publicly available\footnote{\url{https://github.com/PanosAntoniadis/NTUA-ABAW2021}}.
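To make the described pipeline concrete, the following is a minimal PyTorch sketch of a visual CNN-RNN cascade combined with an aural stream over mel-spectrogram sequences, fused by averaging per-frame logits. It is an illustration under stated assumptions, not the exact architecture of this work: the ResNet-18 frame encoder, GRU recurrent layers, hidden size of 256, seven output classes, dummy input shapes, and logit-averaging fusion are all hypothetical choices made for brevity.

\begin{verbatim}
# Hypothetical sketch (not the authors' exact model): visual CNN-RNN
# cascade plus an aural CNN-RNN stream over mel-spectrogram chunks,
# ensembled by averaging per-frame logits.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class VisualCNNRNN(nn.Module):
    """ResNet-18 frame encoder followed by a GRU (seq2seq logits)."""
    def __init__(self, num_classes=7, hidden=256):
        super().__init__()
        self.cnn = resnet18()
        self.cnn.fc = nn.Identity()           # 512-d per-frame features
        self.rnn = nn.GRU(512, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, frames):                # frames: (B, T, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)
        out, _ = self.rnn(feats)
        return self.head(out)                 # per-frame logits (B, T, C)

class AuralCNNRNN(nn.Module):
    """Small 2-D CNN over per-frame mel-spectrogram chunks, then a GRU."""
    def __init__(self, num_classes=7, hidden=256):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.rnn = nn.GRU(64, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, mels):                  # mels: (B, T, 1, n_mels, W)
        b, t = mels.shape[:2]
        feats = self.cnn(mels.flatten(0, 1)).view(b, t, -1)
        out, _ = self.rnn(feats)
        return self.head(out)

# Late fusion of the two streams on dummy inputs.
visual, aural = VisualCNNRNN(), AuralCNNRNN()
frames = torch.randn(2, 8, 3, 112, 112)       # dummy RGB clip
mels = torch.randn(2, 8, 1, 64, 16)           # dummy mel-spectrogram chunks
logits = (visual(frames) + aural(mels)) / 2   # (2, 8, 7)
\end{verbatim}

In this sketch the ensemble is a simple average of the two streams' logits; the actual model combines the RGB, bodily/contextual, and aural modules as described above.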