Event cameras are novel bio-inspired sensors that asynchronously measure per-pixel brightness changes. Recovering brightness from events is appealing because the reconstructed images inherit the high dynamic range (HDR) and high-speed properties of events; hence they can be used in many robotic vision applications and to generate slow-motion HDR videos. However, state-of-the-art methods tackle this problem by training an event-to-image recurrent neural network (RNN), which lacks explainability and is difficult to tune. In this work we show, for the first time, how tackling the joint problem of motion and brightness estimation leads us to formulate event-based image reconstruction as a linear inverse problem that can be solved without training an image reconstruction RNN. Instead, classical and learning-based image priors can be used to solve the problem and remove artifacts from the reconstructed images. The experiments show that the proposed approach generates images with visual quality on par with state-of-the-art methods despite only using data from a short time interval. The proposed linear formulation and solvers have a unifying character, because they can also be applied to reconstruct brightness from the second derivative. Additionally, the linear formulation is attractive because it can be naturally combined with super-resolution, motion segmentation, and color demosaicing.
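To illustrate the general idea of solving a linear inverse problem with an image prior, the following is a minimal sketch, not the paper's actual formulation: it assumes a hypothetical linear measurement operator `A` mapping the unknown brightness `x` to event-derived measurements `b`, and uses a classical Tikhonov (L2) prior as a simple stand-in for the classical and learned priors mentioned above.

```python
import numpy as np

# Hypothetical setup: A is a linear operator relating brightness to
# event-derived measurements; b is the (noisy) measurement vector.
rng = np.random.default_rng(0)
n_meas, n_pix = 120, 64                      # toy problem sizes
A = rng.standard_normal((n_meas, n_pix))     # stand-in measurement matrix
x_true = rng.standard_normal(n_pix)          # unknown brightness (flattened)
b = A @ x_true + 0.01 * rng.standard_normal(n_meas)

# Regularized least squares:  min_x ||A x - b||^2 + lam * ||x||^2
# (Tikhonov prior; the paper's priors can be swapped in at this point.)
lam = 1e-2
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_pix), A.T @ b)

rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```

Because the problem is linear, the solver is a closed-form (or iterative) linear-algebra routine rather than a trained RNN, which is what makes the formulation explainable and easy to tune.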