3D Morphable Models (3DMMs) demonstrate great potential for reconstructing faithful and animatable 3D facial surfaces from a single image. The facial surface is influenced by the coarse shape, as well as the static detail (e.g., person-specific appearance) and the dynamic detail (e.g., expression-driven wrinkles). Previous work struggles to decouple the static and dynamic details through image-level supervision, leading to unrealistic reconstructions. In this paper, we aim at high-fidelity 3D face reconstruction and propose HiFace to explicitly model the static and dynamic details. Specifically, the static detail is modeled as the linear combination of a displacement basis, while the dynamic detail is modeled as the linear interpolation of two displacement maps with polarized expressions. We exploit several loss functions to jointly learn the coarse shape and fine details with both synthetic and real-world datasets, enabling HiFace to reconstruct high-fidelity 3D shapes with animatable details. Extensive quantitative and qualitative experiments demonstrate that HiFace achieves state-of-the-art reconstruction quality and faithfully recovers both the static and dynamic details. Our project page can be found at https://project-hiface.github.io
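To make the detail decomposition concrete, the sketch below illustrates the two operations named in the abstract: a static displacement map formed as a linear combination of a displacement basis, and a dynamic displacement map formed by linearly interpolating two maps with polarized (extreme) expressions. This is a minimal illustration, not the authors' implementation; the array shapes, the UV-map resolution, the scalar interpolation weight, and the additive combination of the two details are all assumptions made for clarity.

```python
import numpy as np

def static_detail(static_basis: np.ndarray, alpha: np.ndarray) -> np.ndarray:
    """Static detail as a linear combination of a displacement basis.

    static_basis: (K, H, W) displacement basis in UV space (assumed layout).
    alpha:        (K,) person-specific coefficients.
    Returns an (H, W) static displacement map.
    """
    return np.tensordot(alpha, static_basis, axes=1)

def dynamic_detail(compressed_map: np.ndarray,
                   stretched_map: np.ndarray,
                   beta: float) -> np.ndarray:
    """Dynamic detail as a linear interpolation between two displacement maps
    with polarized expressions (e.g., fully compressed vs. fully stretched
    wrinkles). Here beta is a scalar in [0, 1]; an expression-driven,
    spatially varying weight would work the same way (assumption).
    """
    return beta * compressed_map + (1.0 - beta) * stretched_map

# Illustrative usage with random placeholders.
K, H, W = 50, 256, 256
basis = np.random.randn(K, H, W).astype(np.float32)
alpha = np.random.randn(K).astype(np.float32)
d_static = static_detail(basis, alpha)                      # (256, 256)
d_dynamic = dynamic_detail(np.random.randn(H, W).astype(np.float32),
                           np.random.randn(H, W).astype(np.float32),
                           beta=0.3)                         # (256, 256)
# Assumed additive combination of the two detail maps on top of the coarse shape.
displacement = d_static + d_dynamic
```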