In recent years, there has been growing interest in methodologies that integrate more and more computation at the level of the image sensor. This trend has spurred research into novel event cameras that can perform CNN computation directly in the sensor. However, event-based cameras are not widely available on the market, which limits performance exploration of high-level models and algorithms. This paper presents an event camera simulator that can serve as a potent tool for hardware design prototyping, parameter optimization, attention-based algorithm development, and benchmarking. The proposed simulator implements a distributed computation model to identify relevant regions in an image frame. The simulator's relevance computation model is realized as a collection of modules that perform their computations in parallel. The distributed computation model is configurable, making it highly useful for design space exploration. The simulator's rendering engine samples frame regions only when a new event occurs. The simulator closely emulates an image processing pipeline similar to that of physical cameras. Our experimental results show that the simulator can effectively emulate event vision with low overhead.
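To illustrate the idea of event-driven, region-wise sampling described above, the following is a minimal sketch, not the authors' implementation: it assumes a hypothetical tile size `TILE`, a per-region relevance test `compute_relevance` based on intensity change, and a `render_events` loop that re-samples only the regions whose relevance modules fire, evaluating regions in parallel.

```python
# Minimal sketch (assumed, not the paper's code): event-driven rendering that
# re-samples only frame regions flagged as relevant by parallel modules.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

TILE = 32  # hypothetical region (tile) size in pixels


def compute_relevance(prev_tile: np.ndarray, curr_tile: np.ndarray,
                      threshold: float = 10.0) -> bool:
    """One relevance module: flag a region as an 'event' if its mean
    absolute intensity change exceeds a threshold (assumed criterion)."""
    diff = np.abs(curr_tile.astype(np.int16) - prev_tile.astype(np.int16))
    return float(diff.mean()) > threshold


def render_events(prev_frame: np.ndarray, curr_frame: np.ndarray) -> np.ndarray:
    """Event-driven rendering: carry the previous frame forward and re-sample
    only the tiles whose relevance modules fired."""
    h, w = curr_frame.shape
    out = prev_frame.copy()
    tiles = [(y, x) for y in range(0, h, TILE) for x in range(0, w, TILE)]

    def process(yx):
        y, x = yx
        return yx, compute_relevance(prev_frame[y:y + TILE, x:x + TILE],
                                     curr_frame[y:y + TILE, x:x + TILE])

    # The collection of relevance modules runs in parallel.
    with ThreadPoolExecutor() as pool:
        for (y, x), is_event in pool.map(process, tiles):
            if is_event:
                out[y:y + TILE, x:x + TILE] = curr_frame[y:y + TILE, x:x + TILE]
    return out
```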