We propose a novel 3D morphable model for complete human heads based on hybrid neural fields. At the core of our model lies a neural parametric representation that disentangles identity and expressions into disjoint latent spaces. To this end, we capture a person's identity in a canonical space as a signed distance field (SDF), and model facial expressions with a neural deformation field. In addition, our representation achieves high-fidelity local detail by introducing an ensemble of local fields centered around facial anchor points. To facilitate generalization, we train our model on a newly captured dataset of over 2200 head scans from 124 different identities, acquired with a custom high-end 3D scanning setup. Our dataset significantly exceeds comparable existing datasets in both quality and completeness of geometry, averaging around 3.5M mesh faces per scan. Finally, we demonstrate that our approach outperforms state-of-the-art methods by a significant margin in terms of fitting error and reconstruction quality.
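The representation described above can be illustrated with a minimal, hypothetical sketch: an expression-conditioned deformation field warps a query point into canonical space, where an identity SDF, built as an ensemble of local fields blended around facial anchor points, is evaluated. All networks below are stand-ins (fixed random linear maps rather than trained MLPs), and the names `deform` and `identity_sdf` are illustrative, not the paper's actual API.

```python
import numpy as np

rng = np.random.default_rng(0)

D_LAT = 8                                   # toy latent dimensionality
N_ANCHORS = 4                               # the real model uses many more anchors
anchors = rng.normal(size=(N_ANCHORS, 3))   # canonical facial anchor points
z_id = rng.normal(size=D_LAT)               # identity latent code
z_exp = rng.normal(size=D_LAT)              # expression latent code

# Stand-in "networks": fixed random linear maps instead of trained MLPs.
W_def = rng.standard_normal((3, 3 + D_LAT)) * 0.01
W_sdf = rng.standard_normal((N_ANCHORS, 3 + D_LAT)) * 0.1

def deform(x, z_exp):
    """Expression deformation field: maps a posed-space point to canonical space."""
    return x + W_def @ np.concatenate([x, z_exp])

def identity_sdf(x_can, z_id):
    """Identity SDF: ensemble of local fields blended around anchor points."""
    # Each local field sees the query point relative to its anchor.
    sdf_k = np.array([W_sdf[k] @ np.concatenate([x_can - anchors[k], z_id])
                      for k in range(N_ANCHORS)])
    # Gaussian blending weights based on squared distance to each anchor.
    d2 = np.sum((x_can - anchors) ** 2, axis=1)
    w = np.exp(-d2)
    w /= w.sum()
    return float(w @ sdf_k)

# Evaluate the SDF for a posed (expression-space) query point.
x = np.array([0.1, 0.0, 0.2])
sdf_value = identity_sdf(deform(x, z_exp), z_id)
print(sdf_value)
```

The key property sketched here is the disentanglement: `z_exp` only influences the warp into canonical space, while `z_id` only shapes the canonical geometry, so identity and expression can be fit or swapped independently.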