Many recent works have reconstructed distinctive 3D face shapes by aggregating the shape parameters of the same identity and separating those of different people based on parametric models (e.g., 3D morphable models (3DMMs)). However, despite the high face recognition accuracy achieved with these shape parameters, the visual discrimination of the face shapes reconstructed from them remains unsatisfactory. The following research question has not been answered in previous works: do discriminative shape parameters guarantee visual discrimination in the represented 3D face shapes? This paper analyzes the relationship between shape parameters and reconstructed shape geometry and proposes a novel shape identity-aware regularization (SIR) loss for shape parameters, aiming to increase discriminability in both the shape parameter and shape geometry domains. Moreover, to cope with the lack of training data containing both landmark and identity annotations, we propose a network structure and an associated training strategy to leverage mixed data containing either identity or landmark labels. We compare our method with existing methods in terms of reconstruction error, visual distinguishability, and face recognition accuracy of the shape parameters. Experimental results show that our method outperforms the state-of-the-art methods.
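As a rough illustration of the idea of enforcing identity discriminability in both domains (this is a hedged sketch, not the paper's actual SIR formulation; the function name, the contrastive form, and the `margin` value are assumptions for demonstration), one could apply the same identity-aware contrastive structure to the 3DMM shape parameters and to the geometry they reconstruct:

```python
# Minimal sketch of an identity-aware regularization applied in BOTH the
# shape-parameter domain and the reconstructed-geometry domain.
# `basis`, `mean_shape`, and `margin` are illustrative assumptions.
import torch
import torch.nn.functional as F

def sir_style_loss(params, identities, basis, mean_shape, margin=0.2):
    """params: (B, K) predicted 3DMM shape parameters for a batch.
    identities: (B,) integer identity labels.
    basis: (3N, K) linear shape basis; mean_shape: (3N,) mean face.
    Returns a scalar loss pulling same-identity samples together and
    pushing different-identity samples apart in both domains."""
    # Reconstruct per-sample geometry with the linear 3DMM model.
    verts = mean_shape.unsqueeze(0) + params @ basis.T            # (B, 3N)

    def pairwise_contrast(x):
        # Cosine-similarity contrastive term over all pairs in the batch.
        x = F.normalize(x, dim=1)
        sim = x @ x.T                                             # (B, B)
        same = identities.unsqueeze(0) == identities.unsqueeze(1)
        eye = torch.eye(len(x), dtype=torch.bool, device=x.device)
        pos = (1.0 - sim)[same & ~eye]                            # pull together
        neg = F.relu(sim - margin)[~same]                         # push apart
        pos_term = pos.mean() if pos.numel() else sim.new_zeros(())
        neg_term = neg.mean() if neg.numel() else sim.new_zeros(())
        return pos_term + neg_term

    # Apply the same contrastive structure to parameters and to geometry.
    return pairwise_contrast(params) + pairwise_contrast(verts)
```

The point of the sketch is only that regularizing the parameters alone does not necessarily separate the reconstructed shapes, so the geometry term is applied explicitly as well.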