We present PanoHDR-NeRF, a novel pipeline to casually capture a plausible full HDR radiance field of a large indoor scene without elaborate setups or complex capture protocols. First, a user captures a low dynamic range (LDR) omnidirectional video of the scene by freely waving an off-the-shelf camera around the scene. Then, an LDR2HDR network uplifts the captured LDR frames to HDR, which are subsequently used to train a tailored NeRF++ model. The resulting PanoHDR-NeRF pipeline can estimate full HDR panoramas at any location in the scene. Through experiments on a novel test dataset of a variety of real scenes, with ground truth HDR radiance captured at locations not seen during training, we show that PanoHDR-NeRF predicts plausible radiance at any scene point. We also show that the HDR images produced by PanoHDR-NeRF can synthesize correct lighting effects, enabling the augmentation of indoor scenes with synthetic objects that are lit correctly.
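To make the overall data flow of the pipeline concrete, the following is a minimal sketch of the stages described above (LDR capture, LDR-to-HDR uplifting, radiance-field training, and HDR panorama queries). All function and class names here (load_ldr_video, LDR2HDRNet, NeRFppHDR, render_panorama) are hypothetical placeholders standing in for the actual components, not the authors' implementation or API.

```python
# Hypothetical sketch of the PanoHDR-NeRF data flow; all names are placeholders.
import numpy as np

def load_ldr_video(path, num_frames=100, height=256, width=512):
    """Stand-in for decoding LDR equirectangular frames from an omnidirectional video."""
    # Returns frames in [0, 1]; a real loader would read and decode the video file.
    return np.random.rand(num_frames, height, width, 3).astype(np.float32)

class LDR2HDRNet:
    """Placeholder for the LDR-to-HDR uplifting network."""
    def uplift(self, ldr_frames):
        # A trained network would recover clipped highlights; here an
        # inverse-gamma curve is a crude stand-in for linear HDR radiance.
        return np.power(ldr_frames, 2.2)

class NeRFppHDR:
    """Placeholder for the tailored NeRF++ model trained on HDR panoramas."""
    def fit(self, hdr_frames, camera_poses):
        # Trivial stand-in for optimizing a radiance field from posed HDR frames.
        self.mean_radiance = hdr_frames.mean(axis=0)

    def render_panorama(self, position):
        # A real model would volume-render a full HDR panorama at `position`.
        return self.mean_radiance

if __name__ == "__main__":
    ldr = load_ldr_video("scene.mp4")                        # step 1: casual LDR capture
    hdr = LDR2HDRNet().uplift(ldr)                           # step 2: LDR -> HDR uplift
    model = NeRFppHDR()
    model.fit(hdr, camera_poses=None)                        # step 3: train radiance field
    pano = model.render_panorama(position=(0.0, 0.0, 1.5))   # step 4: query any location
    print(pano.shape)  # (256, 512, 3) HDR panorama
```

The rendered HDR panoramas can then be used directly as image-based lighting to relight synthetic objects inserted into the scene.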