We introduce a novel framework for solving inverse problems using NeRF-style generative models. We are interested in the problem of 3-D scene reconstruction given a single 2-D image and known camera parameters. We show that naively optimizing the latent space leads to artifacts and poor novel-view renderings. We attribute this problem to volume obstructions that are apparent in the 3-D geometry and become visible in the renderings of novel views. We propose a novel radiance-field regularization method that yields better 3-D surfaces and improved novel views from single-view observations. Our method naturally extends to general inverse problems, including inpainting, where only part of a single view is observed. We evaluate our method experimentally, achieving visual improvements and performance gains over the baselines across a wide range of tasks. Our method achieves a $30$--$40\%$ reduction in MSE and a $15$--$25\%$ reduction in LPIPS loss compared to the previous state of the art.
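The latent-space optimization described above can be sketched as follows. This is a minimal toy illustration, not the paper's actual pipeline: the linear "renderer" `G`, the squared-norm regularizer, and all dimensions are illustrative assumptions standing in for a pretrained NeRF-style generator and the proposed radiance-field regularization.

```python
import numpy as np

# Toy stand-in for a NeRF-style generator that renders an "image" from a
# latent code z. The real method uses a pretrained radiance-field model;
# a fixed linear map keeps the sketch self-contained and runnable.
rng = np.random.default_rng(0)
G = rng.standard_normal((16, 4))  # latent dim 4 -> "pixel" dim 16

def render(z):
    return G @ z

def loss(z, y, lam=0.1):
    # Data term: match the single observed 2-D view.
    # Regularizer (illustrative): a latent-norm penalty standing in for
    # the paper's radiance-field regularization.
    return np.sum((render(z) - y) ** 2) + lam * np.sum(z ** 2)

def grad(z, y, lam=0.1):
    return 2 * G.T @ (render(z) - y) + 2 * lam * z

# Single observed view, generated from a ground-truth latent.
z_true = rng.standard_normal(4)
y = render(z_true)

# Plain gradient descent in latent space.
z = np.zeros(4)
for _ in range(500):
    z -= 0.01 * grad(z, y)

assert loss(z, y) < loss(np.zeros(4), y)  # optimization reduced the loss
```

For a partial observation (the inpainting setting), the data term would be evaluated only on the observed pixels, e.g. via a binary mask applied inside `loss` and `grad`.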