Domain diversities, including inconsistent annotations and varied image collection conditions, inevitably exist among different facial expression recognition (FER) datasets, posing an evident challenge for adapting a FER model trained on one dataset to another. Recent works mainly focus on learning domain-invariant deep features with adversarial mechanisms, ignoring the sibling facial action unit (AU) detection task, which has made great progress. Considering that AUs objectively determine facial expressions, this paper proposes an AU-guided unsupervised Domain Adaptive FER (AdaFER) framework to relieve the annotation bias between different FER datasets. In AdaFER, we first leverage an advanced AU-detection model on both the source and target domains. We then compare the AU results to perform AU-guided annotating, i.e., target faces that share the same AUs as source faces inherit the labels from the source domain. Meanwhile, to achieve domain-invariant compact features, we utilize AU-guided triplet training, which randomly collects anchor-positive-negative triplets on both domains according to AUs. Extensive experiments on several popular benchmarks show that AdaFER achieves state-of-the-art results on all of them.
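The AU-guided annotating step described above can be illustrated with a minimal sketch. It assumes AU detections are binary activation vectors (one entry per action unit); the function and variable names (`au_guided_annotate`, `au_source`, `source_labels`, `au_target`) are illustrative, not from the paper.

```python
import numpy as np

def au_guided_annotate(au_source, source_labels, au_target):
    """Assign each target face the expression label of a source face
    that has exactly the same active AUs, if such a face exists."""
    # Index source faces by their AU activation pattern.
    lookup = {}
    for aus, label in zip(au_source, source_labels):
        lookup.setdefault(tuple(aus), label)
    # Target faces with a matching AU pattern inherit the source label;
    # unmatched faces remain unlabeled (None).
    return [lookup.get(tuple(aus)) for aus in au_target]

# Toy example with three hypothetical AUs per face.
au_src = np.array([[1, 0, 1], [0, 1, 0]])
labels = ["happy", "sad"]
au_tgt = np.array([[0, 1, 0], [1, 1, 1]])
print(au_guided_annotate(au_src, labels, au_tgt))  # → ['sad', None]
```

The inherited labels can then serve as pseudo-labels on the target domain, while the AU patterns themselves define positives and negatives for the triplet training mentioned above.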