Fast neuromorphic event-based vision sensors (Dynamic Vision Sensor, DVS) can be combined with slower conventional frame-based sensors to enable higher-quality inter-frame interpolation than traditional methods that rely on fixed motion approximations such as optical flow. In this work we present a new, advanced event simulator that can produce realistic scenes recorded by a camera rig with an arbitrary number of sensors located at fixed offsets. It includes a new configurable frame-based image sensor model with realistic image-quality degradation effects, and an extended DVS model with more accurate characteristics. We use our simulator to train a novel reconstruction model designed for end-to-end reconstruction of high-FPS video. Unlike previously published methods, our method does not require the frame and DVS cameras to share the same optics, position, or resolution. Nor is it limited to objects at a fixed distance from the sensor. We show that data generated by our simulator can be used to train our new model, producing reconstructed images on public datasets of equal or better quality than the state of the art. We also show our model generalizing to data recorded by real sensors.
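For readers unfamiliar with event cameras, the sketch below illustrates the idealized DVS event-generation principle that event simulators of this kind build on: a pixel emits an event whenever its log intensity drifts by more than a contrast threshold from its value at the pixel's last event. This is a minimal, simplified illustration only, not the extended DVS model described above; the function name dvs_events, the default threshold, and the timestamping of events at frame boundaries are illustrative assumptions.

```python
import numpy as np

def dvs_events(frames, timestamps, threshold=0.15, eps=1e-6):
    """Idealized DVS event generation from a video tensor (sketch).

    frames:     array of shape (T, H, W), linear pixel intensities
    timestamps: sequence of T frame times
    Returns a list of (x, y, t, polarity) events. An event fires whenever
    the per-pixel log intensity moves more than `threshold` away from its
    reference value; several events fire if the change spans several
    thresholds. Real DVS pixels add noise, latency, and refractory effects
    that this sketch omits.
    """
    log_ref = np.log(frames[0] + eps)  # per-pixel reference log intensity
    events = []
    for t, frame in zip(timestamps[1:], frames[1:]):
        log_new = np.log(frame + eps)
        diff = log_new - log_ref
        n = np.floor(np.abs(diff) / threshold).astype(int)  # crossings per pixel
        ys, xs = np.nonzero(n)
        for y, x in zip(ys, xs):
            pol = 1 if diff[y, x] > 0 else -1
            events.extend((x, y, t, pol) for _ in range(n[y, x]))
        # advance each reference by the quantized change only
        log_ref += np.sign(diff) * n * threshold
    return events
```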