Neural radiance fields (NeRF) have shown great success in novel view synthesis. However, in real-world scenes, recovering high-quality details from the source images remains challenging for existing NeRF-based approaches, due to potentially imperfect calibration information and scene representation inaccuracy. Even with high-quality training frames, the synthetic novel views produced by NeRF models still suffer from notable rendering artifacts, such as noise and blur. To improve the synthesis quality of NeRF-based approaches, we propose NeRFLiX, a general NeRF-agnostic restorer paradigm that learns a degradation-driven inter-viewpoint mixer. Specifically, we design a NeRF-style degradation modeling approach and construct large-scale training data, making it possible for existing deep neural networks to effectively remove NeRF-native rendering artifacts. Moreover, beyond degradation removal, we propose an inter-viewpoint aggregation framework that fuses highly related high-quality training images, pushing the performance of cutting-edge NeRF models to entirely new levels and producing highly photo-realistic synthetic views.
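The degradation-driven data construction described above can be illustrated with a toy sketch: synthesize (degraded, clean) training pairs by applying NeRF-like artifacts (blur and noise) to clean frames. This is a minimal illustration only, not the paper's actual degradation model; the function name `nerf_style_degrade` and the choice of box blur plus Gaussian noise are assumptions made here for clarity.

```python
import numpy as np


def nerf_style_degrade(clean, blur_size=3, noise_sigma=0.02, seed=0):
    """Simulate simple NeRF-style rendering artifacts on a clean frame.

    Toy stand-in for the paper's degradation model: a box blur loosely
    mimics rendering blur, and additive Gaussian noise loosely mimics
    ray-sampling noise. `clean` is an HxWxC float image in [0, 1].
    """
    rng = np.random.default_rng(seed)
    # Box blur: average each pixel over a (blur_size x blur_size) window,
    # padding the borders by edge replication.
    pad = blur_size // 2
    padded = np.pad(clean, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    blurred = np.zeros_like(clean)
    for dy in range(blur_size):
        for dx in range(blur_size):
            blurred += padded[dy:dy + clean.shape[0], dx:dx + clean.shape[1]]
    blurred /= blur_size ** 2
    # Additive Gaussian noise, then clip back to the valid intensity range.
    noisy = blurred + rng.normal(0.0, noise_sigma, size=clean.shape)
    return np.clip(noisy, 0.0, 1.0)


# Build one (degraded, clean) supervision pair from a synthetic ramp image.
clean = np.tile(np.linspace(0.0, 1.0, 8)[None, :, None], (8, 1, 3))
degraded = nerf_style_degrade(clean)
```

A restoration network would then be trained to map `degraded` back to `clean`, so that at test time it can remove similar artifacts from rendered NeRF views.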