Neural Radiance Fields (NeRF) have demonstrated excellent quality in novel view synthesis, thanks to their ability to model 3D object geometries in a concise formulation. However, current NeRF-based models rely on clean images with accurate camera calibration, which can be difficult to obtain in the real world, where data is often subject to corruption and distortion. In this work, we provide the first comprehensive analysis of the robustness of NeRF-based novel view synthesis algorithms in the presence of different types of corruptions. We find that NeRF-based models degrade significantly under corruption, and that they are sensitive to a different set of corruptions than image recognition models. Furthermore, we analyze the robustness of the feature encoder in generalizable methods, which synthesize images from neural features extracted via convolutional neural networks or transformers, and find that it contributes only marginally to robustness. Finally, we reveal that standard data augmentation techniques, which can significantly improve the robustness of recognition models, do not help the robustness of NeRF-based models. We hope that our findings will attract more researchers to study the robustness of NeRF-based approaches and help improve their performance in the real world.
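The corruption setting studied above can be illustrated with a minimal sketch: perturbing an input view before it enters a NeRF pipeline. The severity-to-sigma schedule below is an illustrative assumption modeled on common image-corruption benchmarks, not a detail taken from this work.

```python
import numpy as np

def corrupt_gaussian_noise(image: np.ndarray, severity: int = 3) -> np.ndarray:
    """Apply additive Gaussian noise, one common corruption type.

    `image` holds float values in [0, 1]; `severity` in 1..5 selects an
    increasing noise level (the sigma values are illustrative assumptions).
    """
    sigma = [0.04, 0.06, 0.08, 0.09, 0.10][severity - 1]
    rng = np.random.default_rng(0)  # fixed seed for reproducibility
    noisy = image + rng.normal(0.0, sigma, size=image.shape)
    # Clip back to the valid intensity range after adding noise.
    return np.clip(noisy, 0.0, 1.0)

# Corrupt a toy "training view" before feeding it to a NeRF model.
clean = np.full((4, 4, 3), 0.5)
corrupted = corrupt_gaussian_noise(clean, severity=5)
print(corrupted.shape, float(corrupted.min()), float(corrupted.max()))
```

Analogous functions for blur, JPEG artifacts, or camera-pose jitter would follow the same shape: a deterministic perturbation parameterized by severity, applied to the otherwise clean inputs the models assume.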