Vision-based autonomous navigation systems rely on fast and accurate object detection to avoid obstacles. Algorithms and sensors designed for such systems must be computationally efficient, given the limited energy budget of the deployment hardware. Biologically inspired event cameras are a strong candidate vision sensor for such systems due to their speed, energy efficiency, and robustness to varying lighting conditions. However, traditional computer vision algorithms fail on event-based outputs, as event streams lack photometric features such as light intensity and texture. In this work, we propose a novel technique that exploits the temporal information inherently present in events to efficiently detect moving objects. Our technique consists of a lightweight spiking neural architecture that separates events according to the speed of the corresponding objects; the separated events are then grouped spatially to determine object boundaries. This method of object detection is both asynchronous and robust to camera noise. In addition, it performs well in scenarios where static objects in the background also generate events, a setting in which existing event-based algorithms fail. We show that with our architecture, autonomous navigation systems can perform object detection with minimal latency and energy overhead.
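To make the pipeline described above concrete, the following is a minimal sketch, not the paper's published implementation: it uses per-pixel leaky integrate-and-fire (LIF) neurons as a proxy for the spiking architecture, exploiting the fact that faster objects produce higher local event rates, and then groups the fast events spatially into bounding boxes. The event format `(t, x, y)`, the resolution, the leak constant `TAU`, the threshold `V_THRESH`, and the grouping radius are all illustrative assumptions.

```python
# Hypothetical sketch of speed-based event separation + spatial grouping.
# All constants below are assumptions for illustration only.
import numpy as np

H, W = 64, 64       # sensor resolution (assumed)
TAU = 0.02          # membrane leak time constant in seconds (assumed)
V_THRESH = 3.0      # spike threshold; fast-moving edges drive more
                    # events per pixel per unit time and cross it sooner

def lif_separate(events):
    """Split events into fast/slow via per-pixel LIF membrane dynamics.

    events: iterable of (t, x, y) tuples, t in seconds, sorted by t.
    Returns (fast, slow): events whose pixel's membrane potential did /
    did not reach V_THRESH when the event arrived.
    """
    v = np.zeros((H, W))        # membrane potential per pixel
    last_t = np.zeros((H, W))   # time of each pixel's previous event
    fast, slow = [], []
    for t, x, y in events:
        # Exponential leak since this pixel's previous event.
        v[y, x] *= np.exp(-(t - last_t[y, x]) / TAU)
        last_t[y, x] = t
        v[y, x] += 1.0          # unit charge per incoming event
        if v[y, x] >= V_THRESH:  # high event rate => fast-moving object
            fast.append((t, x, y))
            v[y, x] = 0.0        # reset after the neuron spikes
        else:
            slow.append((t, x, y))
    return fast, slow

def group_boxes(events, radius=2):
    """Group events spatially (flood fill) into bounding boxes."""
    mask = np.zeros((H, W), dtype=bool)
    for _, x, y in events:
        mask[y, x] = True
    seen = np.zeros_like(mask)
    boxes = []
    for yy, xx in zip(*np.nonzero(mask)):
        if seen[yy, xx]:
            continue
        stack, xs, ys = [(yy, xx)], [], []
        seen[yy, xx] = True
        while stack:
            cy, cx = stack.pop()
            xs.append(cx); ys.append(cy)
            for ny in range(max(cy - radius, 0), min(cy + radius + 1, H)):
                for nx in range(max(cx - radius, 0), min(cx + radius + 1, W)):
                    if mask[ny, nx] and not seen[ny, nx]:
                        seen[ny, nx] = True
                        stack.append((ny, nx))
        boxes.append((min(xs), min(ys), max(xs), max(ys)))
    return boxes
```

Because the LIF state is updated one event at a time, this sketch processes the stream asynchronously, and the leak term naturally discounts isolated noise events as well as the low-rate events produced by static background structure, which is the failure mode of existing event-based methods noted above.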