Recent advances in neural radiance fields (NeRFs) achieve state-of-the-art novel view synthesis and facilitate dense estimation of scene properties. However, NeRFs often fail for large, unbounded scenes captured under very sparse views with the scene content concentrated far from the camera, as is typical for field robotics applications. In particular, NeRF-style algorithms perform poorly: (1) when there are insufficient views with little pose diversity, (2) when scenes contain saturation and shadows, and (3) when finely sampling large unbounded scenes with thin structures becomes computationally intensive. This paper proposes CLONeR, which significantly improves upon NeRF by allowing it to model large outdoor driving scenes observed from sparse input sensor views. This is achieved by decoupling occupancy and color learning within the NeRF framework into separate Multi-Layer Perceptrons (MLPs) trained using LiDAR and camera data, respectively. In addition, this paper proposes a novel method to build differentiable 3D Occupancy Grid Maps (OGM) alongside the NeRF model, and to leverage this occupancy grid for improved sampling of points along a ray for volumetric rendering in metric space. Through extensive quantitative and qualitative experiments on scenes from the KITTI dataset, this paper demonstrates that the proposed method outperforms state-of-the-art NeRF models on both novel view synthesis and dense depth prediction tasks when trained on sparse input data.
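The occupancy-grid-guided ray sampling described above can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the grid layout, the 1.0/0.05 occupied/free weights, and the function name `occupancy_guided_samples` are all illustrative assumptions; the idea shown is simply that a coarse binary occupancy grid can reshape where sample points are placed along a ray before querying the NeRF MLPs.

```python
import numpy as np

def occupancy_guided_samples(origin, direction, grid, voxel_size,
                             t_max, n_samples, n_coarse=64):
    """Sample ray distances t, concentrated in occupied voxels.

    origin, direction: ray in metric space (direction assumed unit-norm).
    grid: hypothetical boolean 3D occupancy grid anchored at the world origin.
    Returns n_samples distances along the ray.
    """
    # Coarsely probe the ray and check which probes land in occupied voxels.
    t_coarse = np.linspace(0.0, t_max, n_coarse)
    pts = origin[None, :] + t_coarse[:, None] * direction[None, :]
    idx = np.floor(pts / voxel_size).astype(int)
    in_bounds = np.all((idx >= 0) & (idx < np.array(grid.shape)), axis=1)
    occupied = np.zeros(n_coarse, dtype=bool)
    occupied[in_bounds] = grid[tuple(idx[in_bounds].T)]

    # Piecewise-constant sampling weights: high in occupied space, low
    # (but nonzero, so free space is still probed) elsewhere.
    w = np.where(occupied, 1.0, 0.05)
    w /= w.sum()
    cdf = np.cumsum(w)

    # Stratified inverse-transform sampling of the coarse CDF.
    u = (np.arange(n_samples) + 0.5) / n_samples
    bins = np.searchsorted(cdf, u)
    return t_coarse[np.clip(bins, 0, n_coarse - 1)]
```

Under these assumptions, a ray passing through a small band of occupied voxels receives most of its samples inside that band, so a fixed sample budget resolves distant, fine structures that uniform sampling over a large unbounded scene would mostly miss.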