We propose a parametric model that maps free-view images into a vector space of coded facial shape, expression, and appearance with a neural radiance field, namely Morphable Facial NeRF (MoFaNeRF). Specifically, MoFaNeRF takes the coded facial shape, expression, and appearance along with a spatial coordinate and view direction as input to an MLP, and outputs the radiance of the spatial point for photo-realistic image synthesis. Compared with conventional 3D morphable models (3DMMs), MoFaNeRF shows superiority in directly synthesizing photo-realistic facial details, even for eyes, mouths, and beards. Also, continuous face morphing can be easily achieved by interpolating the input shape, expression, and appearance codes. By introducing identity-specific modulation and a texture encoder, our model synthesizes accurate photometric details and shows strong representation ability. Our model demonstrates strong performance on multiple applications, including image-based fitting, random generation, face rigging, face editing, and novel-view synthesis. Experiments show that our method achieves higher representation ability than previous parametric models and achieves competitive performance in several applications. To the best of our knowledge, our work is the first facial parametric model built upon a neural radiance field that can be used for fitting, generation, and manipulation. The code and data are available at https://github.com/zhuhao-nju/mofanerf.
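To make the described interface concrete, below is a minimal PyTorch sketch of a conditional NeRF MLP that takes shape, expression, and appearance codes together with a positionally encoded 3D point and view direction and returns color and density. The class name, code dimensions, layer widths, and encoding sizes are illustrative assumptions, and the sketch omits the identity-specific modulation and texture encoder described above; the repository linked above contains the actual architecture.

```python
import torch
import torch.nn as nn


class MoFaNeRFSketch(nn.Module):
    """Illustrative sketch of a code-conditioned NeRF MLP (not the paper's exact network).

    Inputs: shape/expression/appearance codes, a positionally encoded 3D point,
    and a positionally encoded view direction. Outputs: RGB color and volume density.
    All dimensions below are assumptions (e.g., 10/4 frequency bands as in vanilla NeRF).
    """

    def __init__(self, dim_shape=64, dim_expr=32, dim_app=64,
                 dim_pos=63, dim_dir=27, width=256):
        super().__init__()
        in_dim = dim_shape + dim_expr + dim_app + dim_pos
        self.trunk = nn.Sequential(
            nn.Linear(in_dim, width), nn.ReLU(),
            nn.Linear(width, width), nn.ReLU(),
            nn.Linear(width, width), nn.ReLU(),
        )
        self.sigma_head = nn.Linear(width, 1)          # view-independent density
        self.rgb_head = nn.Sequential(                 # view-dependent color
            nn.Linear(width + dim_dir, width // 2), nn.ReLU(),
            nn.Linear(width // 2, 3), nn.Sigmoid(),
        )

    def forward(self, shape_code, expr_code, app_code, pos_enc, dir_enc):
        # Concatenate the identity/expression/appearance codes with the encoded point.
        h = self.trunk(torch.cat([shape_code, expr_code, app_code, pos_enc], dim=-1))
        sigma = self.sigma_head(h)
        rgb = self.rgb_head(torch.cat([h, dir_enc], dim=-1))
        return rgb, sigma


# Usage with random tensors for a batch of 1024 sampled points (shapes are illustrative):
model = MoFaNeRFSketch()
rgb, sigma = model(torch.randn(1024, 64), torch.randn(1024, 32),
                   torch.randn(1024, 64), torch.randn(1024, 63),
                   torch.randn(1024, 27))
```

The outputs would then be composited along each ray with standard volume rendering; face morphing in this setup amounts to linearly interpolating the input code vectors before the forward pass.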