Human emotions can be inferred from facial expressions. However, facial-expression annotations are often highly noisy under common emotion coding models, both categorical and dimensional. To reduce the human labelling effort for multi-task labels, we introduce a new problem of facial emotion recognition with noisy multi-task annotations. For this new problem, we propose a formulation from the viewpoint of joint distribution matching, which aims to learn more reliable correlations between raw facial images and multi-task labels and thereby reduce the influence of label noise. Within this formulation, we devise a new method that couples emotion prediction and joint distribution learning in a unified adversarial learning game. Extensive experiments validate the practical setting of the proposed problem and demonstrate the clear superiority of our method over state-of-the-art competing methods, on both the synthetically noisy-labeled CIFAR-10 and the practically noisy multi-task labeled RAF and AffectNet. The code is available at https://github.com/sanweiliti/noisyFER.
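To make the adversarial joint-distribution idea concrete, below is a minimal, hypothetical PyTorch sketch; it is not the released noisyFER code, and all module and function names (Predictor, Discriminator, train_step) are illustrative assumptions. A predictor maps faces to multi-task labels (categorical emotion plus dimensional valence/arousal), while a discriminator tries to tell (image, noisy-label) pairs from (image, predicted-label) pairs; training the predictor to fool the discriminator pushes its outputs toward the joint distribution of images and labels.

```python
import torch
import torch.nn as nn

class Predictor(nn.Module):
    """Maps an image to multi-task labels: class probabilities + valence/arousal."""
    def __init__(self, num_classes=7):
        super().__init__()
        self.backbone = nn.Sequential(  # stand-in for a real CNN backbone
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.cls_head = nn.Linear(64, num_classes)  # categorical emotion
        self.dim_head = nn.Linear(64, 2)            # dimensional valence/arousal

    def forward(self, x):
        h = self.backbone(x)
        return self.cls_head(h).softmax(dim=-1), torch.tanh(self.dim_head(h))

class Discriminator(nn.Module):
    """Scores (image, label) pairs: high for real noisy pairs, low for predicted ones."""
    def __init__(self, num_classes=7):
        super().__init__()
        self.img_enc = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.joint = nn.Sequential(
            nn.Linear(32 + num_classes + 2, 64), nn.ReLU(),
            nn.Linear(64, 1))

    def forward(self, x, y_cls, y_dim):
        return self.joint(torch.cat([self.img_enc(x), y_cls, y_dim], dim=-1))

def train_step(pred, disc, opt_p, opt_d, x, y_cls_noisy, y_dim_noisy):
    """One adversarial step. y_cls_noisy: one-hot (B, C) floats; y_dim_noisy: (B, 2)."""
    bce = nn.BCEWithLogitsLoss()
    # Discriminator step: noisy annotated pairs as "real", predicted pairs as "fake".
    p_cls, p_dim = pred(x)
    d_real = disc(x, y_cls_noisy, y_dim_noisy)
    d_fake = disc(x, p_cls.detach(), p_dim.detach())
    loss_d = bce(d_real, torch.ones_like(d_real)) + \
             bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Predictor step: fool the discriminator, matching the joint distribution.
    p_cls, p_dim = pred(x)
    d_fake = disc(x, p_cls, p_dim)
    loss_p = bce(d_fake, torch.ones_like(d_fake))
    opt_p.zero_grad(); loss_p.backward(); opt_p.step()
    return loss_d.item(), loss_p.item()
```

In this sketch, robustness to label noise comes from matching the joint distribution rather than fitting each noisy label directly: the discriminator can only be fooled consistently if the predicted labels co-vary with images the way real annotations do.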