Automatically recognizing human emotions and expressions is an expected capability for intelligent robots, as it can promote better communication and cooperation with humans. Current deep-learning-based algorithms may achieve impressive performance in lab-controlled environments, but they often fail to recognize expressions accurately in uncontrolled, in-the-wild situations. Fortunately, facial action units (AUs) describe subtle facial behaviors and can help distinguish uncertain and ambiguous expressions. In this work, we explore the correlations between action units and facial expressions, and devise an AU-Expression Knowledge Constrained Representation Learning (AUE-CRL) framework that learns AU representations without AU annotations and adaptively uses these representations to facilitate facial expression recognition. Specifically, the framework leverages AU-expression correlations to guide the learning of the AU classifiers, and thus obtains AU representations without requiring any AU annotations. It then introduces a knowledge-guided attention mechanism that mines useful AU representations under the constraint of AU-expression correlations. In this way, the framework can capture locally discriminative and complementary features to enhance facial representations for expression recognition. We conduct experiments on challenging uncontrolled datasets to demonstrate the superiority of the proposed framework over current state-of-the-art methods. Code and trained models are available at https://github.com/HCPLab-SYSU/AUE-CRL.
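To make the knowledge-guided attention idea concrete, here is a minimal, self-contained sketch. It is not the paper's implementation: the `PRIOR` correlation table, the AU choices, and the scoring rule are all hypothetical placeholders. The sketch only illustrates the general mechanism of weighting per-AU features more heavily when a prior says those AUs correlate with the currently likely expressions.

```python
import math

# Hypothetical AU-expression correlation prior (values are illustrative only,
# not taken from the paper). Rows: expressions; columns: three example AUs.
PRIOR = {
    "happy": [0.9, 0.1, 0.8],  # e.g. AU6 (cheek raiser), AU4, AU12
    "sad":   [0.1, 0.9, 0.2],
}

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def knowledge_guided_attention(au_feats, expr_probs):
    """Fuse per-AU feature vectors, attending more strongly to AUs that
    the prior correlates with the currently likely expressions."""
    n_aus = len(au_feats)
    # Expected relevance of each AU under the predicted expression distribution.
    scores = [
        sum(expr_probs[e] * PRIOR[e][j] for e in expr_probs)
        for j in range(n_aus)
    ]
    weights = softmax(scores)
    # Attention-weighted sum of the AU feature vectors.
    dim = len(au_feats[0])
    fused = [sum(w * f[d] for w, f in zip(weights, au_feats))
             for d in range(dim)]
    return weights, fused

# Toy usage: three 2-D AU feature vectors, classifier leaning toward "happy".
au_feats = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
weights, fused = knowledge_guided_attention(au_feats, {"happy": 0.8, "sad": 0.2})
```

In this toy setup, the AU whose prior correlation with "happy" is highest receives the largest attention weight, so its features dominate the fused representation; in the actual framework the correlations constrain a learned attention module rather than replacing it.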