Purely MLP-based neural radiance fields (NeRF-based methods) often suffer from underfitting with blurred renderings on large-scale scenes due to limited model capacity. Recent approaches propose to geographically divide the scene and adopt multiple sub-NeRFs to model each region individually, causing training costs and the number of sub-NeRFs to scale linearly as the scene expands. An alternative solution is to use a feature grid representation, which is computationally efficient and can naturally scale to a large scene with increased grid resolutions. However, the feature grid tends to be less constrained and often reaches suboptimal solutions, producing noisy artifacts in renderings, especially in regions with complex geometry and texture. In this work, we present a new framework that realizes high-fidelity rendering on large urban scenes while being computationally efficient. We propose to use a compact multiresolution ground feature plane representation to coarsely capture the scene, and complement it with positional encoding inputs through another NeRF branch for rendering in a joint learning fashion. We show that such an integration can utilize the advantages of both alternative solutions: a lightweight NeRF is sufficient, under the guidance of the feature grid representation, to render photorealistic novel views with fine details; and the jointly optimized ground feature planes are meanwhile further refined, forming a more accurate and compact feature space and producing much more natural rendering results.
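The hybrid encoding described above, where a point's features come from bilinearly interpolating multiresolution ground feature planes and are complemented by a sinusoidal positional encoding fed to the NeRF branch, can be sketched as follows. This is a minimal illustrative sketch in NumPy, not the paper's implementation: the function names, plane resolutions, channel counts, and the simple concatenation-based fusion are all assumptions made here for clarity.

```python
import numpy as np

def bilinear_sample(plane, xy):
    """Bilinearly interpolate a feature plane of shape (H, W, C)
    at a 2D ground-plane location xy, with coordinates in [0, 1]^2."""
    H, W, _ = plane.shape
    x = xy[0] * (W - 1)
    y = xy[1] * (H - 1)
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, W - 1), min(y0 + 1, H - 1)
    wx, wy = x - x0, y - y0
    return ((1 - wx) * (1 - wy) * plane[y0, x0]
            + wx * (1 - wy) * plane[y0, x1]
            + (1 - wx) * wy * plane[y1, x0]
            + wx * wy * plane[y1, x1])

def positional_encoding(p, n_freqs=4):
    """Standard NeRF-style sinusoidal encoding of a 3D point
    (n_freqs is an illustrative choice, not the paper's setting)."""
    out = []
    for i in range(n_freqs):
        f = (2.0 ** i) * np.pi
        out.append(np.sin(f * p))
        out.append(np.cos(f * p))
    return np.concatenate(out)

def encode_point(planes, p):
    """Fuse grid features and positional encoding for one 3D sample p.
    The point's xy is assumed normalized to [0, 1]; `planes` is a list
    of learnable ground feature planes at increasing resolutions."""
    grid_feat = np.concatenate([bilinear_sample(pl, p[:2]) for pl in planes])
    return np.concatenate([grid_feat, positional_encoding(p)])

# Example: three resolution levels with 8 channels each (hypothetical sizes).
rng = np.random.default_rng(0)
planes = [rng.standard_normal((r, r, 8)) for r in (16, 64, 256)]
feat = encode_point(planes, np.array([0.3, 0.7, 0.1]))
# `feat` would then be consumed by the lightweight NeRF MLP branch.
```

In training, the planes would be optimized jointly with the MLP so that the coarse grid features guide the NeRF branch while gradients from rendering refine the planes; the sketch above only shows the forward feature lookup.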