Novel view synthesis (NVS) is a challenging task in computer vision that involves synthesizing new views of a scene from a limited set of input images. Neural Radiance Fields (NeRF) have emerged as a powerful approach to this problem, but they require accurate knowledge of the camera \textit{intrinsic} and \textit{extrinsic} parameters. Traditionally, these parameters are recovered with structure-from-motion (SfM) and multi-view stereo (MVS) pipelines, but such methods can be unreliable and may fail outright on challenging inputs. In this paper, we propose a novel technique that leverages unposed images from dynamic datasets, such as the NVIDIA dynamic scenes dataset, to learn camera parameters directly from data. Our approach is highly extensible and can be integrated into existing NeRF architectures with minimal modifications. We demonstrate the effectiveness of our method on a variety of static and dynamic scenes and show that it outperforms traditional SfM and MVS approaches. The code for our method is publicly available at \href{https://github.com/redacted/refinerf}{https://github.com/redacted/refinerf}. Our approach offers a promising new direction for improving the accuracy and robustness of NVS with NeRF, and we anticipate that it will be a valuable tool for a wide range of applications in computer vision and graphics.
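In outline, learning camera parameters directly from data amounts to treating them as optimization variables alongside the radiance field: the per-view intrinsics and extrinsics are updated by gradient descent on the same photometric reconstruction loss used to train NeRF. A minimal sketch of such a joint objective (the notation here is illustrative, not taken from the paper) is:

```latex
% Joint optimization over NeRF weights \Theta and per-view camera
% parameters \pi_i = (K, R_i, t_i); \mathcal{R}_i denotes the rays
% cast through the pixels of image i, and C_i(\mathbf{r}) the
% observed color at the pixel generating ray \mathbf{r}.
\min_{\Theta,\, \{\pi_i\}} \;
  \sum_{i} \sum_{\mathbf{r} \in \mathcal{R}_i}
  \bigl\| \hat{C}\bigl(\mathbf{r}(\pi_i); \Theta\bigr)
          - C_i(\mathbf{r}) \bigr\|_2^2
```

Because the rendered color $\hat{C}$ is differentiable with respect to the ray origin and direction, gradients flow through $\mathbf{r}(\pi_i)$ back to the camera parameters, so no external SfM/MVS pose estimate is required.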