Low-light scenes captured under common conditions are challenging for most computer vision techniques, including Neural Radiance Fields (NeRF). Vanilla NeRF is viewer-centred: it simplifies rendering to light emitted from 3D locations toward the viewing direction, and therefore cannot model the darkness induced by low illumination. Inspired by the emission theory of the ancient Greeks, which holds that visual perception is accomplished by rays cast from the eyes, we make slight modifications to vanilla NeRF so that it can be trained on multiple views of a low-light scene and then render the well-lit scene in an unsupervised manner. We introduce a surrogate concept, Concealing Fields, which reduces the transport of light during the volume rendering stage. Specifically, our proposed method, Aleth-NeRF, learns directly from dark images to recover both the volumetric object representation and the Concealing Fields under priors. By simply eliminating the Concealing Fields, we can render single- or multi-view well-lit images and achieve superior performance over other 2D low-light enhancement methods. Additionally, we collect the first paired LOw-light and normal-light Multi-view (LOM) dataset for future research.
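To make the mechanism concrete, a minimal sketch of how a concealing field could enter standard NeRF volume rendering is given below; the per-sample concealing values $\Omega_j \in (0,1]$ and their multiplicative placement in the accumulated transmittance are illustrative assumptions, not necessarily the paper's exact formulation.

\[
\hat{C}_{\text{dark}}(\mathbf{r}) \;=\; \sum_{i=1}^{N} \Big(\prod_{j<i} \Omega_j\Big)\, T_i \,\big(1 - e^{-\sigma_i \delta_i}\big)\, c_i,
\qquad
T_i = \exp\Big(-\sum_{j<i} \sigma_j \delta_j\Big),
\]

where $\sigma_i$, $c_i$, and $\delta_i$ are the usual per-sample density, color, and interval length along ray $\mathbf{r}$. Under this sketch, setting every $\Omega_j = 1$ (i.e., eliminating the Concealing Fields) recovers the ordinary transmittance and hence a well-lit rendering, while training fits the darkened output $\hat{C}_{\text{dark}}$ to the low-light observations.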