We present Neural Head Avatars, a novel neural representation that explicitly models the surface geometry and appearance of an animatable human avatar that can be used for teleconferencing in AR/VR or other applications in the movie or games industry that rely on a digital human. Our representation can be learned from a monocular RGB portrait video that features a range of different expressions and views. Specifically, we propose a hybrid representation consisting of a morphable model for the coarse shape and expressions of the face, and two feed-forward networks that predict vertex offsets of the underlying mesh as well as a view- and expression-dependent texture. We demonstrate that this representation is able to accurately extrapolate to unseen poses and viewpoints, and generates natural expressions while providing sharp texture details. Compared to previous works on head avatars, our method provides a disentangled shape and appearance model of the complete human head (including hair) that is compatible with the standard graphics pipeline. Moreover, it quantitatively and qualitatively outperforms the current state of the art in terms of reconstruction quality and novel-view synthesis.
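To make the hybrid representation concrete, the following is a minimal sketch, written in PyTorch as an assumption (the abstract does not name a framework), of how a morphable model for coarse shape and expression could be combined with two feed-forward networks predicting vertex offsets and a view- and expression-dependent texture. All class names, feature dimensions, and the simplified linear blend-shape model are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch of the hybrid avatar representation described above.
# Assumptions (not from the paper): PyTorch, a FLAME-like vertex count,
# plain MLPs for both networks, and a linear blend-shape morphable model.
import torch
import torch.nn as nn

NUM_VERTS = 5023          # e.g. the FLAME template mesh size (assumption)
SHAPE_DIM, EXPR_DIM = 100, 50
TEX_RES = 64              # texture resolution (assumption)

class MorphableHead(nn.Module):
    """Coarse shape/expression model: template mesh + linear blend shapes."""
    def __init__(self):
        super().__init__()
        self.template = nn.Parameter(torch.zeros(NUM_VERTS, 3))
        self.shape_basis = nn.Parameter(torch.zeros(SHAPE_DIM, NUM_VERTS, 3))
        self.expr_basis = nn.Parameter(torch.zeros(EXPR_DIM, NUM_VERTS, 3))

    def forward(self, shape, expr):
        # (B, D) coefficients -> (B, V, 3) coarse vertex positions
        v = self.template.unsqueeze(0)
        v = v + torch.einsum("bd,dvc->bvc", shape, self.shape_basis)
        v = v + torch.einsum("bd,dvc->bvc", expr, self.expr_basis)
        return v

class OffsetNet(nn.Module):
    """Feed-forward network refining the coarse mesh with per-vertex offsets."""
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(EXPR_DIM, 256), nn.ReLU(),
            nn.Linear(256, NUM_VERTS * 3),
        )

    def forward(self, expr):
        return self.mlp(expr).view(-1, NUM_VERTS, 3)

class TextureNet(nn.Module):
    """Feed-forward network predicting a view- and expression-dependent texture."""
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(EXPR_DIM + 3, 256), nn.ReLU(),  # expression + view direction
            nn.Linear(256, TEX_RES * TEX_RES * 3),
        )

    def forward(self, expr, view_dir):
        x = torch.cat([expr, view_dir], dim=-1)
        return self.mlp(x).view(-1, 3, TEX_RES, TEX_RES)

class NeuralHeadAvatar(nn.Module):
    """Hybrid representation: morphable model + offset and texture networks."""
    def __init__(self):
        super().__init__()
        self.morphable = MorphableHead()
        self.offsets = OffsetNet()
        self.texture = TextureNet()

    def forward(self, shape, expr, view_dir):
        verts = self.morphable(shape, expr) + self.offsets(expr)
        tex = self.texture(expr, view_dir)
        return verts, tex  # explicit mesh + texture for a standard rasterizer

# Usage: produce geometry and texture for one animated frame, batch of 2.
avatar = NeuralHeadAvatar()
shape = torch.zeros(2, SHAPE_DIM)
expr = torch.randn(2, EXPR_DIM)
view = torch.tensor([[0.0, 0.0, 1.0]]).expand(2, -1)
verts, tex = avatar(shape, expr, view)
print(verts.shape, tex.shape)  # (2, 5023, 3) and (2, 3, 64, 64)
```

Because the output of such a model is an explicit mesh plus a texture image rather than an implicit field, it can be consumed by any standard rasterizer, which is what the abstract means by compatibility with the standard graphics pipeline.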