Reconstructing high-fidelity 3D facial texture from a single image is a challenging task due to the lack of complete face information and the domain gap between the 3D face and the 2D image. Furthermore, obtaining re-renderable 3D faces has become a strongly desired property in many applications, where the term 're-renderable' demands the facial texture to be spatially complete and disentangled from environmental illumination. In this paper, we propose a new self-supervised deep learning framework for reconstructing high-quality and re-renderable facial albedos from single-view images in the wild. Our main idea is to first utilize a prior generation module based on the 3DMM proxy model to produce an unwrapped texture and a globally parameterized prior albedo. Then we apply a detail refinement module to synthesize the final texture with both high-frequency details and completeness. To further disentangle facial textures from illumination, we propose a novel detailed illumination representation which is reconstructed together with the detailed albedo. We also design several novel regularization losses on both the albedo and illumination maps to facilitate the disentanglement of these two factors. Finally, by leveraging a differentiable renderer, each face attribute can be jointly trained in a self-supervised manner without requiring ground-truth facial reflectance. Extensive comparisons and ablation studies on challenging datasets demonstrate that our framework outperforms state-of-the-art approaches.
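The self-supervised training signal described above can be illustrated with a minimal sketch: a Lambertian face rendering under spherical-harmonics (SH) illumination, compared photometrically against the input image, plus a simple albedo smoothness regularizer. This is an assumption-laden toy version (NumPy, single image, fixed geometry); the function names `render` and `self_supervised_loss`, the specific loss weights, and the choice of a first-order smoothness term are all hypothetical illustrations, not the paper's exact formulation.

```python
import numpy as np

def sh_basis(normals):
    # First 9 real spherical-harmonics basis functions evaluated at
    # per-pixel unit normals: (H, W, 3) -> (H, W, 9). Standard constants.
    x, y, z = normals[..., 0], normals[..., 1], normals[..., 2]
    return np.stack([
        np.full_like(x, 0.282095),
        0.488603 * y,
        0.488603 * z,
        0.488603 * x,
        1.092548 * x * y,
        1.092548 * y * z,
        0.315392 * (3.0 * z**2 - 1.0),
        1.092548 * x * z,
        0.546274 * (x**2 - y**2),
    ], axis=-1)

def render(albedo, normals, sh_coeffs):
    # Lambertian shading: per-pixel SH irradiance times albedo.
    # albedo: (H, W, 3), normals: (H, W, 3), sh_coeffs: (9, 3).
    shading = sh_basis(normals) @ sh_coeffs   # -> (H, W, 3)
    return albedo * shading

def self_supervised_loss(image, albedo, normals, sh_coeffs,
                         w_photo=1.0, w_smooth=0.01):
    # Photometric reconstruction term (no reflectance ground truth needed)
    # plus a hypothetical first-order smoothness regularizer on the albedo.
    rendered = render(albedo, normals, sh_coeffs)
    photo = np.mean(np.abs(rendered - image))
    smooth = (np.mean(np.abs(np.diff(albedo, axis=0))) +
              np.mean(np.abs(np.diff(albedo, axis=1))))
    return w_photo * photo + w_smooth * smooth
```

In the full framework this loss would be backpropagated through a differentiable renderer to jointly update the albedo, illumination, and geometry networks; here the renderer is only a closed-form shading model for clarity.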