Recent advances in neural radiance fields (NeRFs) achieve state-of-the-art novel view synthesis and facilitate dense estimation of scene properties. However, NeRFs often fail for large, unbounded scenes captured under very sparse views with the scene content concentrated far from the camera, as is typical for field robotics applications. In particular, NeRF-style algorithms perform poorly: (1) when there are insufficient views with little pose diversity, (2) when scenes contain saturation and shadows, and (3) when finely sampling large unbounded scenes with fine structures becomes computationally intensive. This paper proposes CLONeR, which significantly improves upon NeRF by allowing it to model large outdoor driving scenes observed from sparse input sensor views. This is achieved by decoupling occupancy and color learning within the NeRF framework into separate Multi-Layer Perceptrons (MLPs) trained using LiDAR and camera data, respectively. In addition, this paper proposes a novel method to build differentiable 3D Occupancy Grid Maps (OGMs) alongside the NeRF model, and to leverage this occupancy grid for improved sampling of points along a ray during volumetric rendering in metric space. Through extensive quantitative and qualitative experiments on scenes from the KITTI dataset, this paper demonstrates that the proposed method outperforms state-of-the-art NeRF models on both novel view synthesis and dense depth prediction tasks when trained on sparse input data.
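The two core ideas in the abstract — compositing per-ray densities and colors in metric space, and concentrating ray samples in occupied voxels of a coarse 3D grid — can be illustrated with a minimal NumPy sketch. The function names, the single-ray interface, and the deterministic midpoint sampling below are illustrative assumptions for exposition, not the paper's actual implementation (which trains the occupancy and color MLPs on LiDAR and camera data, respectively).

```python
import numpy as np

def volume_render(sigmas, colors, t_vals):
    """Composite one ray's per-sample densities and colors (standard
    NeRF-style quadrature; sigmas would come from the occupancy MLP,
    colors from the color MLP in a CLONeR-like split).

    sigmas: (N,) non-negative volume densities
    colors: (N, 3) per-sample RGB
    t_vals: (N,) metric depths of the samples along the ray
    Returns (rgb, depth): rendered color and expected ray depth.
    """
    # Interval lengths; the last interval is treated as open-ended.
    deltas = np.diff(t_vals, append=t_vals[-1] + 1e10)
    alphas = 1.0 - np.exp(-sigmas * deltas)            # per-interval opacity
    # Transmittance T_i = prod_{j<i} (1 - alpha_j)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas
    rgb = (weights[:, None] * colors).sum(axis=0)
    depth = (weights * t_vals).sum()                   # expected metric depth
    return rgb, depth

def occupancy_guided_samples(t_near, t_far, occ_bins, n_samples):
    """Place ray samples only inside occupied bins of a coarse grid.

    occ_bins: (K,) booleans marking which of K equal depth bins along
    the ray intersect occupied voxels of the occupancy grid map.
    """
    edges = np.linspace(t_near, t_far, len(occ_bins) + 1)
    occupied = np.flatnonzero(occ_bins)
    if occupied.size == 0:                             # uniform fallback
        return np.linspace(t_near, t_far, n_samples)
    bins = occupied[np.arange(n_samples) % occupied.size]
    lo, hi = edges[bins], edges[bins + 1]
    return np.sort(lo + 0.5 * (hi - lo))               # bin midpoints (jitter omitted)
```

A sharp density spike at one sample makes the expected depth collapse to that sample's metric depth, which is how LiDAR depth supervision can shape the occupancy MLP independently of color; the guided sampler simply skips free space so the sample budget is spent near surfaces.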