We present a novel framework to regularize Neural Radiance Fields (NeRF) in a few-shot setting with a geometry-aware consistency regularization. The proposed approach leverages a depth map rendered at an unobserved viewpoint to warp sparse input images to that viewpoint, and imposes the warped images as pseudo ground truths to facilitate the learning of NeRF. By encouraging such geometry-aware consistency at the feature level instead of using a pixel-level reconstruction loss, we regularize NeRF at semantic and structural levels while still allowing it to model view-dependent radiance, which accounts for color variations across viewpoints. We also propose an effective method to filter out erroneously warped solutions, along with training strategies to stabilize optimization. We show that our model achieves competitive results compared to state-of-the-art few-shot NeRF models. Project page is available at https://ku-cvlab.github.io/GeCoNeRF/.
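The core geometric operation described above, warping a sparse input view to an unobserved viewpoint using the depth map NeRF renders there, can be sketched as a standard inverse warp. This is a minimal illustrative sketch, not the authors' implementation: the function name, the shared-intrinsics assumption, and the nearest-neighbour sampling (bilinear sampling would be used in practice) are our own simplifications.

```python
import numpy as np

def warp_source_to_novel(src_img, novel_depth, K, T_novel_to_src):
    """Warp a source image to a novel viewpoint via backward warping,
    using the depth map rendered at that novel viewpoint.

    src_img        : (H, W, 3) source-view image
    novel_depth    : (H, W) depth rendered at the novel view
    K              : (3, 3) camera intrinsics (assumed shared by both views)
    T_novel_to_src : (4, 4) rigid transform, novel-camera to source-camera frame
    Returns the warped pseudo ground-truth image and a validity mask.
    """
    H, W = novel_depth.shape
    # Pixel grid of the novel view in homogeneous coordinates.
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).astype(np.float64)
    # Back-project novel-view pixels to 3D points using the rendered depth.
    pts_novel = (np.linalg.inv(K) @ pix.T) * novel_depth.reshape(1, -1)
    pts_h = np.vstack([pts_novel, np.ones((1, pts_novel.shape[1]))])
    # Transform the points into the source camera frame and project them.
    pts_src = (T_novel_to_src @ pts_h)[:3]
    proj = K @ pts_src
    z = proj[2]
    valid = z > 1e-6                      # points in front of the source camera
    x = np.where(valid, proj[0] / np.where(valid, z, 1.0), -1.0)
    y = np.where(valid, proj[1] / np.where(valid, z, 1.0), -1.0)
    # Nearest-neighbour sampling for simplicity.
    xi = np.round(x).astype(int)
    yi = np.round(y).astype(int)
    inside = valid & (xi >= 0) & (xi < W) & (yi >= 0) & (yi < H)
    warped = np.zeros((H * W, 3), dtype=src_img.dtype)
    warped[inside] = src_img[yi[inside], xi[inside]]
    return warped.reshape(H, W, 3), inside.reshape(H, W)
```

In the paper's pipeline, the warped image is then compared to the NeRF rendering at the novel view at the feature level rather than per pixel, and an occlusion/error mask plays the role of the validity mask returned here.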