In this paper, we focus on the problem of rendering novel views from a Neural Radiance Field (NeRF) under unobserved light conditions. To this end, we introduce a novel dataset, dubbed ReNe (Relighting NeRF), framing real-world objects under one-light-at-a-time (OLAT) conditions and annotated with accurate ground-truth camera and light poses. Our acquisition pipeline leverages two robotic arms holding, respectively, a camera and an omni-directional point-wise light source. We release a total of 20 scenes depicting a variety of objects with complex geometry and challenging materials. Each scene includes 2000 images, acquired from 50 different points of view under 40 different OLAT conditions. Leveraging the dataset, we perform an ablation study on the relighting capability of variants of the vanilla NeRF architecture and identify a lightweight architecture that can render novel views of an object under novel light conditions, which we use to establish a non-trivial baseline for the dataset. Dataset and benchmark are available at https://eyecan-ai.github.io/rene.
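The per-scene layout described above (one image per combination of camera viewpoint and OLAT condition) can be sketched as follows. This is a minimal illustrative enumeration, not the dataset's actual file naming or API; the field names and file pattern are assumptions.

```python
from itertools import product

# Hypothetical index for one ReNe scene: 50 camera viewpoints x 40 OLAT
# light conditions, one image per (view, light) pair.
NUM_VIEWS, NUM_LIGHTS = 50, 40

# Each frame pairs a camera pose index with a light pose index; the
# filename pattern below is purely illustrative.
frames = [
    {"view": v, "light": l, "file": f"view{v:02d}_light{l:02d}.png"}
    for v, l in product(range(NUM_VIEWS), range(NUM_LIGHTS))
]

assert len(frames) == NUM_VIEWS * NUM_LIGHTS  # 2000 images per scene
```

Holding the light index fixed while varying the view gives a standard novel-view split; holding the view fixed while varying the light isolates the relighting task.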