We propose ClipFace, a novel self-supervised approach for text-guided editing of textured 3D morphable models of faces. Specifically, we employ user-friendly language prompts to enable control over both the expressions and the appearance of 3D faces. We leverage the geometric expressiveness of 3D morphable models, which, however, inherently possess limited controllability and texture expressivity, and develop a self-supervised generative model to jointly synthesize expressive, textured, and articulated faces in 3D. We enable high-quality texture generation for 3D faces through adversarial self-supervised training, guided by differentiable rendering against collections of real RGB images. Controllable editing and manipulation are driven by language prompts that adapt the texture and expression of the 3D morphable model. To this end, we propose a neural network that predicts both texture and expression latent codes of the morphable model. Our model is trained in a self-supervised fashion by exploiting differentiable rendering and losses based on a pre-trained CLIP model. Once trained, our model jointly predicts face textures in UV-space, along with expression parameters, capturing both the geometry and texture changes of facial expressions in a single forward pass. We further show the applicability of our method to generating temporally changing textures for a given animation sequence.
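The abstract mentions losses based on a pre-trained CLIP model that guide the self-supervised training of the texture and expression predictors. A minimal sketch of the core idea is a cosine-similarity loss between the embedding of a rendered face and the embedding of the text prompt; the function name and the NumPy setting below are illustrative assumptions, not the paper's implementation, which uses a pre-trained CLIP encoder and a differentiable renderer end to end.

```python
import numpy as np

def clip_similarity_loss(image_emb, text_emb):
    """CLIP-style guidance loss (illustrative sketch).

    image_emb: embedding of the rendered face image (hypothetical
               output of a pre-trained CLIP image encoder).
    text_emb:  embedding of the language prompt (hypothetical output
               of the matching CLIP text encoder).
    Returns 1 - cosine similarity, so the loss is 0 when the rendered
    face perfectly matches the prompt in embedding space.
    """
    image_emb = np.asarray(image_emb, dtype=np.float64)
    text_emb = np.asarray(text_emb, dtype=np.float64)
    # L2-normalize both embeddings, as CLIP does before comparison.
    image_emb = image_emb / np.linalg.norm(image_emb)
    text_emb = text_emb / np.linalg.norm(text_emb)
    return 1.0 - float(image_emb @ text_emb)
```

In the actual method this scalar would be backpropagated through the differentiable renderer into the texture and expression latent codes; here it only illustrates the shape of the objective.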