Neural Radiance Fields (NeRFs) aim to synthesize novel views of objects and scenes, given object-centric camera views with large overlaps. However, we conjecture that this paradigm does not fit the nature of the street views collected by many self-driving cars in large-scale, unbounded scenes, where the onboard cameras perceive the scene with little overlap. As a result, existing NeRFs often produce blurs, 'floaters', and other artifacts in street-view synthesis. In this paper, we propose a new street-view NeRF (S-NeRF) that jointly considers novel view synthesis of both the large-scale background scenes and the foreground moving vehicles. Specifically, we improve the scene parameterization function and the camera poses to learn better neural representations from street views. We also use the noisy and sparse LiDAR points to boost training and to learn a robust geometry- and reprojection-based confidence that addresses depth outliers. Moreover, we extend our S-NeRF to reconstruct moving vehicles, which is impracticable for conventional NeRFs. Thorough experiments on large-scale driving datasets (e.g., nuScenes and Waymo) demonstrate that our method outperforms state-of-the-art rivals, reducing the mean-squared error in street-view synthesis by 7% to 40% and achieving a 45% PSNR gain for moving-vehicle rendering.