Asynchronously operating event cameras find many applications due to their high dynamic range, absence of motion blur, low latency and low data bandwidth. The field has seen remarkable progress during the last few years, yet existing event-based 3D reconstruction approaches recover only sparse point clouds of the scene. However, such sparsity is a limiting factor in many cases, especially in computer vision and graphics, and it has not been addressed satisfactorily so far. Accordingly, this paper proposes the first approach for 3D-consistent, dense and photorealistic novel view synthesis using just a single colour event stream as input. At the core of our method is a neural radiance field trained entirely in a self-supervised manner from events while preserving the original resolution of the colour event channels. Next, our ray sampling strategy is tailored to events and allows for data-efficient training. At test time, our method produces results in the RGB space at unprecedented quality. We evaluate our method qualitatively and quantitatively on several challenging synthetic and real scenes and show that it produces significantly denser and more visually appealing renderings than the existing methods. We also demonstrate robustness in challenging scenarios with fast motion and under low lighting conditions. We will release our dataset and source code to facilitate research in this field; see https://4dqv.mpi-inf.mpg.de/EventNeRF/.
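The abstract mentions a neural radiance field supervised directly from events. A common way to formulate such self-supervision is via the standard event generation model: the change in log brightness between two timestamps equals the contrast threshold times the signed number of events accumulated in that window. The sketch below illustrates this idea under assumptions; it is not the authors' implementation, and names such as `event_supervision_loss` and the threshold value `C` are hypothetical placeholders.

```python
# Minimal sketch of an event-based supervision loss for a radiance field,
# assuming the standard event generation model:
#   log I(t2) - log I(t1) ≈ C * (signed event count in [t1, t2]).
import torch

C = 0.25  # assumed per-event contrast threshold (sensor-dependent)


def event_supervision_loss(rgb_prev, rgb_curr, accumulated_polarity, eps=1e-5):
    """Compare the predicted log-intensity change with the event-integrated change.

    rgb_prev, rgb_curr:    (N, 3) colours rendered by the radiance field for the
                           same pixel rays at the earlier / later timestamp.
    accumulated_polarity:  (N, 3) signed event counts per colour channel summed
                           over the window between the two timestamps.
    """
    predicted_change = torch.log(rgb_curr + eps) - torch.log(rgb_prev + eps)
    target_change = C * accumulated_polarity
    return torch.mean((predicted_change - target_change) ** 2)


# Toy usage with random tensors standing in for renderings and event counts.
if __name__ == "__main__":
    n_rays = 1024
    rgb_prev = torch.rand(n_rays, 3)
    rgb_curr = torch.rand(n_rays, 3)
    acc_pol = torch.randint(-3, 4, (n_rays, 3)).float()
    print(event_supervision_loss(rgb_prev, rgb_curr, acc_pol))
```

In such a formulation, no ground-truth RGB frames are needed: the radiance field is rendered at pairs of poses along the known camera trajectory and trained so that its predicted brightness changes agree with the recorded events, which matches the self-supervised setting described above.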