Recovering the geometry of a human head from a single image, while factorizing the materials and illumination, is a severely ill-posed problem that requires prior information to be solved. Methods based on 3D Morphable Models (3DMM), and their combination with differentiable renderers, have shown promising results. However, the expressiveness of 3DMMs is limited, and they typically yield over-smoothed and identity-agnostic 3D shapes limited to the face region. Highly accurate full-head reconstructions have recently been obtained with neural fields that parameterize the geometry using multilayer perceptrons. The versatility of these representations has also proved effective for disentangling geometry, materials, and lighting. However, these methods require several tens of input images. In this paper, we introduce SIRA, a method that, from a single image, reconstructs human head avatars with high-fidelity geometry and factorized lights and surface materials. Our key ingredients are two data-driven statistical models based on neural fields that resolve the ambiguities of single-view 3D surface reconstruction and appearance factorization. Experiments show that SIRA obtains state-of-the-art results in 3D head reconstruction while successfully disentangling the global illumination and the diffuse and specular albedos. Furthermore, our reconstructions are amenable to physically-based appearance editing and head model relighting.