Neuromorphic (event-based) image sensors draw inspiration from the human retina to build electronic devices that process visual stimuli much as their biological counterpart does. These sensors process information very differently from traditional RGB sensors: the sensory information generated by event-based image sensors is orders of magnitude sparser than that of RGB sensors. The first generation of neuromorphic image sensors, the Dynamic Vision Sensor (DVS), mimics only the computations confined to the photoreceptors and the first retinal synapse. In this work, we highlight the capability of the second generation of neuromorphic image sensors, Integrated Retinal Functionality in CMOS Image Sensors (IRIS), which aims to mimic the full retinal computation, from the photoreceptors to the retina's output (the retinal ganglion cells), for targeted feature extraction. The feature of choice in this work is Object Motion Sensitivity (OMS), which is computed locally in the IRIS sensor. We study the capability of OMS to solve the ego-motion problem of event-based cameras. Our results show that OMS can accomplish standard computer vision tasks with efficiency comparable to conventional RGB and DVS solutions while offering a drastic bandwidth reduction. This cuts the wireless and computing power budgets and opens up vast opportunities for high-speed, robust, energy-efficient, low-bandwidth real-time decision making.
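To give intuition for the OMS feature, the sketch below is a toy, hedged illustration (not the IRIS circuit itself): a unit responds when event activity in its receptive-field center differs from the surround's, so uniform global motion caused by ego-motion is suppressed while locally moving objects pass through. The function name `oms_response`, the mask layout, and the threshold value are all illustrative assumptions.

```python
import numpy as np

def oms_response(events, center_mask, threshold=0.2):
    """Toy object-motion-sensitivity (OMS) sketch (illustrative only).

    events: 2-D array of per-pixel event (temporal-contrast) activity.
    center_mask: boolean mask marking the receptive-field center.
    The unit fires when center activity exceeds the surround's mean
    activity by more than `threshold`; global (ego-motion) activity,
    which drives center and surround equally, is therefore suppressed.
    """
    center = events[center_mask].mean()      # local motion estimate
    surround = events[~center_mask].mean()   # background motion estimate
    return (center - surround) > threshold
```

Under this toy model, a small object moving against a static background drives only the center and triggers a response, while a camera pan drives every pixel and is rejected.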