Training a Neural Radiance Field (NeRF) without pre-computed camera poses is challenging. Recent advances in this direction demonstrate the possibility of jointly optimising a NeRF and camera poses in forward-facing scenes. However, these methods still face difficulties during dramatic camera movement. We tackle this challenging problem by incorporating undistorted monocular depth priors. These priors are generated by correcting scale and shift parameters during training, with which we are then able to constrain the relative poses between consecutive frames. This constraint is achieved using our proposed novel loss functions. Experiments on real-world indoor and outdoor scenes show that our method can handle challenging camera trajectories and outperforms existing methods in terms of novel view rendering quality and pose estimation accuracy. Our project page is https://nope-nerf.active.vision.
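The scale-and-shift correction mentioned above can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: the paper jointly optimises per-frame scale and shift parameters by gradient descent during NeRF training, whereas here a closed-form least-squares fit stands in for that optimisation, aligning a monocular depth map to a (here, synthetic) rendered depth map.

```python
import numpy as np

def undistort_depth(d_mono, d_rendered):
    """Fit a per-frame scale (alpha) and shift (beta) so that
    alpha * d_mono + beta best matches the rendered depth in the
    least-squares sense (a stand-in for jointly optimised parameters)."""
    A = np.stack([d_mono, np.ones_like(d_mono)], axis=1)
    (alpha, beta), *_ = np.linalg.lstsq(A, d_rendered, rcond=None)
    return alpha * d_mono + beta, alpha, beta

# Toy example: the monocular depth equals the true depth up to an
# unknown scale/shift, which the fit should recover.
rng = np.random.default_rng(0)
d_true = rng.uniform(1.0, 5.0, size=100)   # synthetic "rendered" depth
d_mono = 0.5 * d_true - 0.25               # distorted monocular prior
d_corrected, alpha, beta = undistort_depth(d_mono, d_true)
```

With an exact linear distortion as above, the fit recovers scale 2.0 and shift 0.5, so the corrected depth matches the reference; in the actual method the undistorted depth instead constrains relative poses between consecutive frames through the proposed loss functions.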