We present PanoHDR-NeRF, a neural representation of the full HDR radiance field of an indoor scene, and a pipeline to capture it casually, without elaborate setups or complex capture protocols. First, a user captures a low dynamic range (LDR) omnidirectional video by freely waving an off-the-shelf camera around the scene. Then, an LDR2HDR network uplifts the captured LDR frames to HDR, and these HDR frames are used to train a tailored NeRF++ model. The resulting PanoHDR-NeRF can render full HDR images from any location in the scene. Through experiments on a novel test dataset of real scenes, with ground-truth HDR radiance captured at locations not seen during training, we show that PanoHDR-NeRF predicts plausible HDR radiance from any scene point. We also show that the predicted radiance can synthesize correct lighting effects, enabling the augmentation of indoor scenes with synthetic objects that are lit correctly. Datasets and code are available at https://lvsn.github.io/PanoHDR-NeRF/.
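To make the capture-to-training data flow concrete, the sketch below illustrates the LDR-to-HDR uplift step that precedes NeRF++ training. It substitutes a naive inverse-gamma linearization for the paper's learned LDR2HDR network, which would additionally hallucinate radiance in saturated regions; all function names, parameters, and the placeholder data here are illustrative assumptions, not the authors' API.

```python
import numpy as np

def naive_ldr2hdr(ldr, gamma=2.2, exposure_scale=8.0):
    """Uplift an LDR frame (values in [0, 1]) to linear HDR radiance.

    Hypothetical baseline: a learned LDR2HDR network would also recover
    detail in clipped highlights; this only linearizes and rescales.
    """
    linear = np.clip(ldr, 0.0, 1.0) ** gamma  # undo display gamma
    return linear * exposure_scale            # map into a plausible radiance range

# Placeholder stand-in for captured omnidirectional video frames
# (10 panoramic frames, 256x512, RGB), uplifted before NeRF training.
ldr_frames = np.random.rand(10, 256, 512, 3).astype(np.float32)
hdr_frames = np.stack([naive_ldr2hdr(f) for f in ldr_frames])
print(hdr_frames.shape, hdr_frames.max())  # HDR values exceed 1.0
```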