Human affective behavior analysis has received much attention in human-computer interaction (HCI). In this paper, we present our submission to the CVPR 2022 Competition on Affective Behavior Analysis in-the-wild (ABAW). To fully exploit affective knowledge from multiple views, we leverage multimodal features of spoken words, speech prosody, and facial expression, extracted from the video clips in the Aff-Wild2 dataset. Based on these features, we propose a unified transformer-based multimodal framework for Action Unit (AU) detection and expression recognition. Specifically, a static vision feature is first encoded from the current frame image. At the same time, we sample its adjacent frames with a sliding window and extract three kinds of multimodal features from the resulting sequences of images, audio, and text. We then introduce a transformer-based fusion module that integrates the static vision feature with the dynamic multimodal features; its cross-attention mechanism drives the fused output to focus on the crucial parts that facilitate the downstream detection tasks. We also apply data balancing, data augmentation, and post-processing techniques to further improve model performance. In the official test of the ABAW3 Competition, our model ranks first in both the EXPR and AU tracks. Extensive quantitative evaluations and ablation studies on the Aff-Wild2 dataset demonstrate the effectiveness of our proposed method.
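To make the fusion step concrete, the following is a minimal sketch of a cross-attention fusion module of the kind described above, assuming PyTorch; the class name, feature dimensions, and single-layer depth are illustrative assumptions, not the paper's exact architecture.

```python
# Minimal sketch (PyTorch assumed) of cross-attention fusion between a
# static per-frame vision feature and a sliding-window multimodal sequence.
# Module name and dimensions are hypothetical, for illustration only.
import torch
import torch.nn as nn


class CrossAttentionFusion(nn.Module):
    """Fuses a static vision feature of the current frame with dynamic
    multimodal (vision/audio/text) features from adjacent frames."""

    def __init__(self, dim=512, num_heads=8):
        super().__init__()
        # The static vision feature attends over the multimodal sequence.
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, static_feat, dynamic_seq):
        # static_feat: (B, dim) encoding of the current frame.
        # dynamic_seq: (B, T, dim) windowed image/audio/text features.
        q = static_feat.unsqueeze(1)                       # (B, 1, dim) query
        attn_out, _ = self.cross_attn(q, dynamic_seq, dynamic_seq)
        x = self.norm1(q + attn_out)                       # residual + norm
        x = self.norm2(x + self.ffn(x))                    # feed-forward block
        return x.squeeze(1)                                # (B, dim) fused feature


# Usage: the fused feature would feed task heads for AU detection
# and expression recognition.
fusion = CrossAttentionFusion()
fused = fusion(torch.randn(4, 512), torch.randn(4, 30, 512))
print(fused.shape)  # torch.Size([4, 512])
```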