We propose a parametric model that maps free-view images into a vector space of coded facial shape, expression and appearance using a neural radiance field, namely Morphable Facial NeRF (MoFaNeRF). Specifically, MoFaNeRF takes the coded facial shape, expression and appearance, along with a space coordinate and view direction, as input to an MLP and outputs the radiance of the space point for photo-realistic image synthesis. Compared with conventional 3D morphable models (3DMM), MoFaNeRF shows superiority in directly synthesizing photo-realistic facial details, even for eyes, mouths, and beards. Moreover, continuous face morphing can be easily achieved by interpolating the input shape, expression and appearance codes. By introducing identity-specific modulation and a texture encoder, our model synthesizes accurate photometric details and shows strong representation ability. It supports multiple applications, including image-based fitting, random generation, face rigging, face editing, and novel view synthesis. Experiments show that our method achieves higher representation ability than previous parametric models and competitive performance in several applications. To the best of our knowledge, this is the first facial parametric model built upon a neural radiance field that can be used for fitting, generation and manipulation. Our code and model are released at https://github.com/zhuhao-nju/mofanerf.
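The mapping described above can be pictured as a conditional NeRF-style MLP. Below is a minimal PyTorch sketch, assuming hypothetical layer widths, code dimensions, and simple concatenation-based conditioning; the authors' actual architecture (which additionally uses identity-specific modulation and a texture encoder) differs in its details.

```python
import torch
import torch.nn as nn

class MoFaNeRFSketch(nn.Module):
    """Hypothetical sketch of the MoFaNeRF mapping:
    (shape code, expression code, appearance code, encoded 3D point, encoded
    view direction) -> (RGB radiance, volume density).
    All dimensions and the conditioning scheme are illustrative assumptions,
    not the authors' exact architecture."""

    def __init__(self, dim_shape=256, dim_expr=64, dim_app=256,
                 dim_pos=63, dim_dir=27, hidden=256):
        super().__init__()
        in_dim = dim_shape + dim_expr + dim_app + dim_pos
        self.trunk = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.sigma_head = nn.Linear(hidden, 1)          # volume density of the point
        self.rgb_head = nn.Sequential(                  # view-dependent color
            nn.Linear(hidden + dim_dir, hidden // 2), nn.ReLU(),
            nn.Linear(hidden // 2, 3), nn.Sigmoid(),
        )

    def forward(self, shape_code, expr_code, app_code, pos_enc, dir_enc):
        # Condition the radiance field on identity codes by concatenation.
        h = self.trunk(torch.cat([shape_code, expr_code, app_code, pos_enc], dim=-1))
        sigma = self.sigma_head(h)
        rgb = self.rgb_head(torch.cat([h, dir_enc], dim=-1))
        return rgb, sigma
```

Because the identity is carried by the input codes rather than baked into the network weights, face morphing amounts to interpolating the shape, expression and appearance vectors fed to this MLP while keeping the weights fixed.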