Novel view synthesis is a challenging and ill-posed inverse rendering problem. Neural rendering techniques have recently achieved photorealistic image quality for this task. State-of-the-art (SOTA) neural volume rendering approaches, however, are slow to train and require minutes of inference (i.e., rendering) time for high image resolutions. We adopt high-capacity neural scene representations with periodic activations for jointly optimizing an implicit surface and a radiance field of a scene supervised exclusively with posed 2D images. Our neural rendering pipeline accelerates SOTA neural volume rendering by about two orders of magnitude and our implicit surface representation is unique in allowing us to export a mesh with view-dependent texture information. Thus, like other implicit surface representations, ours is compatible with traditional graphics pipelines, enabling real-time rendering rates, while achieving unprecedented image quality compared to other surface methods. We assess the quality of our approach using existing datasets as well as high-quality 3D face data captured with a custom multi-camera rig.
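The "high-capacity neural scene representations with periodic activations" mentioned above can be illustrated with a minimal sketch of a sine-activated (SIREN-style) MLP mapping 3D coordinates to an implicit surface value. The layer widths, the frequency factor `omega0`, and the forward pass below are illustrative assumptions for exposition, not the paper's exact architecture or training setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def siren_layer(in_dim, out_dim, omega0=30.0, first=False):
    # SIREN-style initialization: uniform weights in [-c, c] so that
    # pre-activations stay well-distributed through the sine nonlinearity.
    c = 1.0 / in_dim if first else np.sqrt(6.0 / in_dim) / omega0
    W = rng.uniform(-c, c, size=(out_dim, in_dim))
    b = np.zeros(out_dim)
    return W, b, omega0

def forward(layers, x):
    # x: (N, 3) 3D coordinates -> (N, 1) implicit surface value
    # (e.g., a signed distance), using periodic sine activations.
    h = x
    for W, b, omega0 in layers[:-1]:
        h = np.sin(omega0 * (h @ W.T + b))  # periodic activation
    W, b, _ = layers[-1]
    return h @ W.T + b  # linear output layer

# A tiny 3 -> 64 -> 64 -> 1 network evaluated at a few sample points.
layers = [siren_layer(3, 64, first=True),
          siren_layer(64, 64),
          siren_layer(64, 1)]
pts = rng.uniform(-1.0, 1.0, size=(4, 3))
sdf = forward(layers, pts)
print(sdf.shape)  # (4, 1)
```

In the actual method, a second network of the same flavor would model the view-dependent radiance field, and both are optimized jointly from posed 2D images; the sketch only shows the representation's forward evaluation.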