How to extract effective expression representations that are invariant to identity-specific attributes is a long-standing problem in facial expression recognition (FER). Most previous methods process the decoded RGB images of a sequence, whereas we argue that the valuable, expression-related muscle movement is already embedded, off the shelf, in the compression format. In this paper, we aim to learn a facial expression representation with inter-subject variations eliminated, directly in the compressed video domain. In this domain, compressed by up to two orders of magnitude, we can explicitly infer the expression from the residual frames and, optionally, extract identity factors from the I-frame with a pre-trained face recognition network. By enforcing marginal independence between the two, the expression feature is expected to be purer for the expression and robust to identity shifts. Specifically, we propose a novel collaborative min-min game for mutual information (MI) minimization in the latent space. Our method requires neither identity labels nor multiple expression samples from the same person for identity elimination. Moreover, when the apex frame is annotated in the dataset, a complementary constraint can be added to further regularize the feature-level game. At test time, only the compressed residual frames are required for expression prediction. Our solution achieves comparable or better performance than recent decoded-image-based methods on typical FER benchmarks, with about 3x faster inference.
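To give a feel for the feature-level independence idea, the following is a minimal toy sketch. It is not the paper's MI estimator or min-min game; instead it uses a simpler cross-covariance penalty (decorrelation, a weaker proxy for marginal independence) between a learnable "expression" projection and frozen "identity" features. All array shapes and names here are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d_x, d_e, d_id = 256, 32, 8, 8

X = rng.normal(size=(N, d_x))              # stand-in for residual-frame features
Z_id = X @ rng.normal(size=(d_x, d_id))    # frozen identity features, correlated with X
W = rng.normal(size=(d_x, d_e))            # learnable expression projection

def decorrelation_loss(W):
    """||cross-covariance(Z_e, Z_id)||_F^2 -- zero iff the features are uncorrelated."""
    Z_e = X @ W
    C = (Z_e - Z_e.mean(0)).T @ (Z_id - Z_id.mean(0)) / N
    return float(np.sum(C ** 2))

# Closed form: loss = tr(W^T A A^T W) with A = Xc^T Zid_c / N, so dloss/dW = 2 A A^T W.
A = (X - X.mean(0)).T @ (Z_id - Z_id.mean(0)) / N
G = A @ A.T
lr = 0.4 / np.linalg.eigvalsh(G).max()     # step size safely below the stability bound

loss_before = decorrelation_loss(W)
for _ in range(300):
    W -= lr * 2 * G @ W                    # gradient descent on the penalty
loss_after = decorrelation_loss(W)
```

After a few hundred steps the penalty collapses toward zero: the projected expression features become (linearly) uncorrelated with the identity features. The actual method replaces this linear decorrelation with MI minimization in the latent space, which also removes nonlinear dependence.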