3D reconstruction from multiple views is a mature computer vision field with numerous deployed applications. The state of the art relies on traditional RGB frames, which enable the optimization of photo-consistency across views. In this paper, we study the problem of 3D reconstruction from event cameras, motivated by the advantages of event-based cameras in terms of low power and latency, as well as by the biological evidence that eyes in nature capture similar data and still perceive 3D shape well. Our hypothesis that 3D reconstruction is feasible from events rests on the information contained in occluding contours and on the continuous scene acquisition that events provide. We propose Apparent Contour Events (ACE), a novel event-based representation that defines the geometry of an object's apparent contour. We represent ACE by a spatially and temporally continuous implicit function defined in the event x-y-t space. Furthermore, we design a novel continuous Voxel Carving algorithm enabled by the high temporal resolution of Apparent Contour Events. To evaluate the performance of the method, we collect MOEC-3D, a 3D event dataset of common real-world objects. We demonstrate the ability of EvAC3D to reconstruct high-fidelity mesh surfaces from real event sequences while allowing the 3D reconstruction to be refined with each individual event.
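The per-event carving idea can be illustrated with a minimal sketch. This is not the paper's implementation: the intrinsics, grid size, ray-marching scheme, and the way contour evidence is accumulated are all simplifying assumptions made for illustration. Each contour event `(x, y, t)` is back-projected through a pinhole camera at its pose, and voxels along the resulting ray accumulate evidence, so the occupancy estimate can be updated one event at a time rather than per frame.

```python
import numpy as np

# Illustrative sketch (not the paper's exact algorithm): per-event voxel
# carving on a coarse occupancy grid over the cube [-1, 1]^3. Each contour
# event (x, y, t) is back-projected with the camera pose (R, t_vec) valid
# at time t; voxels along the ray accumulate "contour evidence".

GRID = 32                        # voxels per side (assumed resolution)
K = np.array([[100., 0., 64.],   # toy pinhole intrinsics (assumption)
              [0., 100., 64.],
              [0., 0., 1.]])

def ray_from_event(x, y, R, t_vec):
    """Back-project pixel (x, y) into a world-space ray (origin, direction)."""
    d_cam = np.linalg.inv(K) @ np.array([x, y, 1.0])
    d_world = R.T @ d_cam                 # rotate ray into world frame
    origin = -R.T @ t_vec                 # camera center in world frame
    return origin, d_world / np.linalg.norm(d_world)

def carve(events, poses, n_steps=64):
    """Accumulate contour evidence along each event ray.

    Because every event is processed independently, the grid can be
    updated continuously as events arrive, which is what makes the
    per-event refinement in the abstract possible.
    """
    score = np.zeros((GRID, GRID, GRID))
    for (x, y, t), (R, t_vec) in zip(events, poses):
        origin, direction = ray_from_event(x, y, R, t_vec)
        for s in np.linspace(0.1, 4.0, n_steps):   # march along the ray
            p = origin + s * direction
            idx = ((p + 1.0) / 2.0 * GRID).astype(int)
            if np.all((idx >= 0) & (idx < GRID)):
                score[tuple(idx)] += 1.0
    return score
```

In the full method the accumulated evidence would be thresholded to extract a visual-hull-like surface; here the function simply returns the raw evidence grid.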