We present a novel method for jointly learning a parametric 3D face model and 3D face reconstruction from diverse data sources. Previous methods usually learn 3D face models from a single kind of source, such as scanned data or in-the-wild images. Although 3D scans contain accurate geometric information about face shapes, the capture systems are expensive and such datasets usually cover only a small number of subjects. In-the-wild face images, on the other hand, are easy to obtain and available in large quantities, but they contain no explicit geometric information. In this paper, we propose a method to learn a unified face model from diverse sources. Besides scanned face data and face images, we also utilize a large number of RGB-D images captured with an iPhone X to bridge the gap between the two sources. Experimental results demonstrate that with training data from more sources, we can learn a more powerful face model.
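Although the abstract does not specify the exact parametrization, parametric 3D face models of the kind referred to here are commonly formulated as linear morphable models. The equation below is a generic sketch under that assumption, with the mean shape $\bar{\mathbf{S}}$, identity and expression bases $\mathbf{B}_{\mathrm{id}}$ and $\mathbf{B}_{\mathrm{exp}}$, and coefficients $\boldsymbol{\alpha}$ and $\boldsymbol{\beta}$ introduced here purely for illustration; the model learned in this paper may differ:

\[
\mathbf{S}(\boldsymbol{\alpha}, \boldsymbol{\beta}) = \bar{\mathbf{S}} + \mathbf{B}_{\mathrm{id}}\,\boldsymbol{\alpha} + \mathbf{B}_{\mathrm{exp}}\,\boldsymbol{\beta}
\]

Under such a parametrization, reconstructing a face from a scan, an RGB image, or an RGB-D frame amounts to estimating the low-dimensional coefficients $\boldsymbol{\alpha}$ and $\boldsymbol{\beta}$ (together with pose and camera parameters), which is what allows heterogeneous data sources to supervise a single shared model.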