Asynchronously operating event cameras find many applications due to their high dynamic range, negligible motion blur, low latency and low data bandwidth. The field has seen remarkable progress over the last few years, yet existing event-based 3D reconstruction approaches recover only sparse point clouds of the scene. Such sparsity is a limiting factor in many cases, especially in computer vision and graphics, and has not been addressed satisfactorily so far. Accordingly, this paper proposes the first approach for 3D-consistent, dense and photorealistic novel view synthesis using just a single colour event stream as input. At its core is a neural radiance field trained entirely in a self-supervised manner from events while preserving the original resolution of the colour event channels. In addition, our ray sampling strategy is tailored to events and enables data-efficient training. At test time, our method produces results in RGB space at unprecedented quality. We evaluate our method qualitatively and quantitatively on several challenging synthetic and real scenes and show that it produces significantly denser and more visually appealing renderings than existing methods. We also demonstrate robustness in challenging scenarios with fast motion and under low lighting conditions. We release the newly recorded dataset and our source code to facilitate further research; see https://4dqv.mpi-inf.mpg.de/EventNeRF.
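To make the self-supervised training signal concrete, below is a minimal PyTorch sketch of an event-based photometric loss of the kind the abstract describes, under stated assumptions: the radiance field is volume-rendered along the same pixel rays at two event timestamps, and the resulting log-brightness change is compared against the change implied by the accumulated event polarities and the camera's contrast threshold. This is not the authors' released code; the names `event_supervision_loss`, `C_thresh` and the shapes used are illustrative assumptions.

```python
import torch


def event_supervision_loss(log_prev: torch.Tensor,
                           log_cur: torch.Tensor,
                           polarity_sum: torch.Tensor,
                           C_thresh: float = 0.25) -> torch.Tensor:
    """Penalise the mismatch between the rendered log-brightness change and the
    change implied by accumulated event polarities (hypothetical sketch).

    log_prev, log_cur: log-radiance rendered by the NeRF along the same pixel
        rays at the camera poses of two event timestamps, shape (N,) or (N, 3).
    polarity_sum: signed sum of event polarities per pixel over the window
        between the two timestamps, same shape as the rendered tensors.
    C_thresh: event-camera contrast threshold (assumed value).
    """
    # An event fires whenever log-brightness changes by +/- C_thresh, so the
    # accumulated polarities approximate the total log-brightness change.
    predicted_change = log_cur - log_prev
    observed_change = C_thresh * polarity_sum
    return torch.mean((predicted_change - observed_change) ** 2)


if __name__ == "__main__":
    # Usage sketch with dummy tensors standing in for rendered log-radiance
    # and per-pixel polarity sums.
    N = 1024
    log_prev = torch.rand(N)                            # rendered at t_prev
    log_cur = torch.rand(N)                             # rendered at t_cur
    polarity_sum = torch.randint(-3, 4, (N,)).float()   # summed polarities
    print(event_supervision_loss(log_prev, log_cur, polarity_sum))
```

In the actual method, the pixel rays fed into such a loss are chosen by an event-tailored sampling strategy, which is what makes training data-efficient; the sketch above only illustrates the per-ray supervision term.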