We propose VDN-NeRF, a method to train neural radiance fields (NeRFs) for better geometry under non-Lambertian surfaces and dynamic lighting conditions, which cause significant variation in the radiance of a point when it is viewed from different angles. Instead of explicitly modeling the underlying factors that produce the view-dependent phenomenon, which can be complex yet not exhaustive, we develop a simple and effective technique that normalizes the view-dependence by distilling invariant information already encoded in the learned NeRFs. We then jointly train NeRFs for view synthesis with view-dependence normalization to attain quality geometry. Our experiments show that even though shape-radiance ambiguity is inevitable, the proposed normalization can minimize its effect on geometry by essentially aligning the optimal capacity needed to explain view-dependent variations. Our method applies to various baselines and significantly improves geometry without changing the volume rendering pipeline, even when the data is captured under a moving light source. Code is available at: https://github.com/BoifZ/VDN-NeRF.