Novel view synthesis using neural radiance fields (NeRF) is the state-of-the-art technique for generating high-quality images from novel viewpoints. Existing methods require a priori knowledge of extrinsic and intrinsic camera parameters. This limits their applicability to synthetic scenes or to real-world scenarios that require a preprocessing step. Current research on the joint optimization of camera parameters and NeRF focuses on refining noisy extrinsic camera parameters and often relies on preprocessing of the intrinsic camera parameters. Other approaches are limited to a single set of camera intrinsics. To address these limitations, we propose a novel end-to-end trainable approach called NeRFtrinsic Four. We use Gaussian Fourier features to estimate extrinsic camera parameters and dynamically predict varying intrinsic camera parameters through supervision of the projection error. Our approach outperforms existing joint optimization methods on LLFF and BLEFF. In addition to these existing datasets, we introduce a new dataset called iFF with varying intrinsic camera parameters. NeRFtrinsic Four is a step forward in joint optimization for NeRF-based view synthesis and enables more realistic and flexible rendering in real-world scenarios with varying camera parameters.
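For reference, the Gaussian Fourier feature mapping referred to above is commonly defined as follows; this is the standard formulation (as introduced by Tancik et al. for positional encodings), and the exact parameterization used to regress camera parameters in NeRFtrinsic Four may differ. An input vector $\mathbf{p}$ is embedded via a random projection matrix $\mathbf{B}$ whose entries are drawn from a Gaussian with scale $\sigma$:
\begin{equation}
\gamma(\mathbf{p}) = \big[\cos(2\pi \mathbf{B}\mathbf{p}),\; \sin(2\pi \mathbf{B}\mathbf{p})\big]^{\top}, \qquad \mathbf{B}_{ij} \sim \mathcal{N}(0, \sigma^{2}),
\end{equation}
where $\sigma$ controls the bandwidth of the resulting features and is a hyperparameter chosen per task.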