We show that ensembling effectively quantifies model uncertainty in Neural Radiance Fields (NeRFs) if a density-aware epistemic uncertainty term is considered. The naive ensembles investigated in prior work simply average rendered RGB images to quantify the model uncertainty caused by conflicting explanations of the observed scene. In contrast, we additionally consider the termination probabilities along individual rays to identify epistemic model uncertainty due to a lack of knowledge about the parts of a scene unobserved during training. We achieve new state-of-the-art performance across established uncertainty quantification benchmarks for NeRFs, outperforming methods that require complex changes to the NeRF architecture and training regime. We furthermore demonstrate that NeRF uncertainty can be utilised for next-best view selection and model refinement.
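The idea above can be sketched numerically. The snippet below is an illustrative sketch, not the authors' implementation: it uses the standard NeRF quadrature weights to combine a "naive" ensemble term (variance of rendered RGB across members) with a density-aware term based on the ray termination probability, where the function names and array shapes are our own assumptions.

```python
import numpy as np

def ray_weights(sigmas, deltas):
    """Standard NeRF alpha-compositing weights along one ray.

    sigmas: (S,) densities at the S samples; deltas: (S,) sample spacings.
    """
    alphas = 1.0 - np.exp(-sigmas * deltas)
    # Transmittance: probability the ray reaches each sample unoccluded.
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    return trans * alphas

def ensemble_uncertainty(member_sigmas, member_rgbs, deltas):
    """Toy per-ray uncertainty for an ensemble of M NeRFs.

    member_sigmas: (M, S) densities; member_rgbs: (M, S, 3); deltas: (S,).
    Returns (rgb_variance, density_term): the naive RGB-disagreement term
    and a density-aware term that is large when rays terminate nowhere,
    i.e. in parts of the scene unobserved during training.
    """
    weights = np.stack([ray_weights(s, deltas) for s in member_sigmas])  # (M, S)
    rgbs = np.einsum('ms,msc->mc', weights, member_rgbs)                 # (M, 3)
    rgb_variance = rgbs.var(axis=0).mean()     # naive ensemble disagreement
    termination = weights.sum(axis=1)          # per-member termination probability
    density_term = (1.0 - termination).mean()  # low termination -> unobserved region
    return rgb_variance, density_term
```

A ray through dense, well-observed geometry terminates with probability near 1, so the density term vanishes and only RGB disagreement remains; a ray into empty, never-observed space yields a density term near 1 even when all members agree on the (meaningless) rendered colour.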