The recent success of Transformers has provided a new direction for various visual understanding tasks, including video-based facial expression recognition (FER). By modeling visual relations effectively, Transformers have shown their power in describing complicated patterns. However, Transformers still struggle to capture subtle facial expression movements, because the expression movements in many videos are too small to extract meaningful spatial-temporal relations from, which prevents robust performance. To this end, we propose to decompose each video into a series of expression snippets, each of which contains a small number of facial movements, and attempt to augment the Transformer's ability to model intra-snippet and inter-snippet visual relations, respectively, obtaining the Expression snippet Transformer (EST). In particular, for intra-snippet modeling, we devise an attention-augmented snippet feature extractor (AA-SFE) to enhance the encoding of subtle facial movements within each snippet by gradually attending to more salient information. In addition, for inter-snippet modeling, we introduce a shuffled snippet order prediction (SSOP) head and a corresponding loss to improve the modeling of subtle motion changes across subsequent snippets by training the Transformer to identify shuffled snippet orders. Extensive experiments on four challenging datasets (i.e., BU-3DFE, MMI, AFEW, and DFEW) demonstrate that our EST is superior to other CNN-based methods, obtaining state-of-the-art performance.
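The SSOP objective described above can be illustrated with a minimal sketch: snippet features are randomly permuted along the temporal axis, and an auxiliary head is trained to recover the permutation, forcing the model to attend to inter-snippet motion order. The function name, shuffle probability, and per-position formulation below are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_ssop_sample(snippet_feats, shuffle_prob=0.5):
    """Hypothetical SSOP data preparation (illustrative, not the paper's code):
    with some probability, permute the temporal order of snippet features.
    The target is the permutation itself (identity when left unshuffled)."""
    n = snippet_feats.shape[0]
    if rng.random() < shuffle_prob:
        perm = rng.permutation(n)
    else:
        perm = np.arange(n)
    return snippet_feats[perm], perm

# Toy snippet features: 4 snippets, each an 8-dim embedding.
feats = rng.normal(size=(4, 8))
shuffled, perm = make_ssop_sample(feats)

# An SSOP head would be trained (e.g., with per-position cross-entropy)
# to recover `perm` from `shuffled`, encouraging the Transformer to model
# subtle motion changes across snippets.
assert np.allclose(shuffled, feats[perm])
```

In training, this would act as an auxiliary self-supervised loss alongside the expression classification loss, so the order-prediction signal regularizes the inter-snippet relations learned by the Transformer.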