Neural radiance fields (NeRF) methods have demonstrated impressive novel view synthesis performance. The core approach is to render individual rays by querying a neural network at points sampled along the ray to obtain the density and colour of the sampled points, and integrating this information using the rendering equation. Since dense sampling is computationally prohibitive, a common solution is to perform coarse-to-fine sampling. In this work we address a clear limitation of the vanilla coarse-to-fine approach -- that it is based on a heuristic and not trained end-to-end for the task at hand. We introduce a differentiable module that learns to propose samples and their importance for the fine network, and consider and compare multiple alternatives for its neural architecture. Training the proposal module from scratch can be unstable due to lack of supervision, so an effective pre-training strategy is also put forward. The approach, named `NeRF in detail' (NeRF-ID), achieves superior view synthesis quality over NeRF and the state-of-the-art on the synthetic Blender benchmark and on par or better performance on the real LLFF-NeRF scenes. Furthermore, by leveraging the predicted sample importance, a 25% saving in computation can be achieved without significantly sacrificing the rendering quality.
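The heuristic coarse-to-fine scheme the abstract refers to draws fine samples by inverse-transform sampling from the coarse network's per-bin weights along each ray. The snippet below is a minimal NumPy sketch of that heuristic (function name and the synthetic Gaussian weight profile are illustrative, not from the paper):

```python
import numpy as np

def sample_pdf(bin_edges, weights, n_fine, rng):
    """Draw fine sample depths via inverse-transform sampling from
    coarse per-bin weights (a sketch of NeRF's heuristic scheme)."""
    w = weights + 1e-5                       # avoid an all-zero PDF
    pdf = w / w.sum()
    cdf = np.concatenate([[0.0], np.cumsum(pdf)])
    u = rng.uniform(size=n_fine)             # uniform samples in [0, 1)
    idx = np.searchsorted(cdf, u, side="right") - 1
    idx = np.clip(idx, 0, len(weights) - 1)  # bin index for each sample
    denom = cdf[idx + 1] - cdf[idx]
    denom = np.where(denom < 1e-8, 1.0, denom)
    t = (u - cdf[idx]) / denom               # position within the bin
    return bin_edges[idx] + t * (bin_edges[idx + 1] - bin_edges[idx])

rng = np.random.default_rng(0)
edges = np.linspace(2.0, 6.0, 65)            # 64 coarse bins along the ray
# illustrative weights: a density peak near depth 4.0
weights = np.exp(-0.5 * ((edges[:-1] - 4.0) / 0.2) ** 2)
fine = sample_pdf(edges, weights, 128, rng)  # concentrates near 4.0
```

NeRF-ID replaces this fixed rule with a learned, differentiable proposal module, trained end-to-end with the fine network.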