Event cameras are bio-inspired sensors that offer advantages over traditional cameras. They operate asynchronously, sampling the scene at microsecond resolution and producing a stream of brightness changes (events). This unconventional output has sparked novel computer vision methods to unlock the camera's potential. Here, the problem of event-based stereo 3D reconstruction for SLAM is considered. Most event-based stereo methods attempt to exploit the high temporal resolution of the camera and the simultaneity of events across cameras to establish matches and estimate depth. By contrast, this work investigates how to estimate depth without explicit data association, by fusing Disparity Space Images (DSIs) that originate from efficient monocular methods. Fusion theory is developed and applied to design multi-camera 3D reconstruction algorithms that produce state-of-the-art results, as confirmed by comparisons with four baseline methods and tests on a variety of available datasets.
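To make the idea of DSI fusion concrete, the sketch below fuses per-camera DSI volumes (ray-count grids over a common depth-plane sweep) and extracts a depth map from the fused volume. It is a minimal illustration only: the fusion function (a harmonic mean), the thresholded arg-max depth extraction, and all parameter names are assumptions for exposition, not the paper's exact algorithm.

```python
# Minimal sketch of depth estimation by fusing per-camera Disparity Space
# Images (DSIs). The harmonic-mean fusion and the confidence threshold are
# illustrative assumptions; the paper's actual fusion design may differ.
import numpy as np

def fuse_dsis_harmonic(dsis, eps=1e-6):
    """Fuse a list of DSI volumes (H x W x D ray-count grids), one per camera,
    defined over a shared depth-plane sweep, via a per-voxel harmonic mean."""
    stack = np.stack(dsis, axis=0).astype(np.float64)        # (N, H, W, D)
    return len(dsis) / np.sum(1.0 / (stack + eps), axis=0)   # (H, W, D)

def extract_depth(fused_dsi, depth_planes, conf_thresh=5.0):
    """Per-pixel depth from the fused DSI: pick the depth plane with the
    largest fused score, keeping only pixels whose score passes a threshold."""
    best = np.argmax(fused_dsi, axis=2)                       # (H, W) plane index
    score = np.max(fused_dsi, axis=2)                         # (H, W) confidence
    depth = depth_planes[best]
    depth[score < conf_thresh] = np.nan                       # reject low-confidence pixels
    return depth

# Toy usage: two cameras, 4 depth planes, random ray counts standing in for
# real DSIs built by back-projecting events along viewing rays.
H, W, D = 60, 80, 4
depth_planes = np.linspace(0.5, 5.0, D)
rng = np.random.default_rng(0)
dsi_left = rng.poisson(3.0, size=(H, W, D)).astype(float)
dsi_right = rng.poisson(3.0, size=(H, W, D)).astype(float)
depth_map = extract_depth(fuse_dsis_harmonic([dsi_left, dsi_right]), depth_planes)
```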