Edge computing solutions that extract high-level information from a variety of sensors are in increasingly high demand, driven by the growing number of smart devices that require sensory processing at the edge. To tackle this problem, we present a smart vision sensor System on Chip (SoC), featuring an event-based camera and a low-power asynchronous spiking Convolutional Neural Network (sCNN) computing architecture embedded on a single chip. By combining both sensor and processing on a single die, we can lower unit production costs significantly. Moreover, the simple end-to-end nature of the SoC facilitates small stand-alone applications as well as operation as an edge node in larger systems. The event-driven nature of the vision sensor delivers high-speed signals in a sparse data stream. This is reflected in the processing pipeline, which focuses on optimising highly sparse computation and minimising the latency of 9 sCNN layers to $3.36\,\mu s$. Overall, this results in an extremely low-latency visual processing pipeline deployed on a small form factor with a low energy budget and low sensor cost. We present the asynchronous architecture, the individual blocks, and the sCNN processing principle, and benchmark against other sCNN-capable processors.
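The sparse, event-driven processing principle described above can be illustrated with a minimal sketch. This is a hypothetical software illustration of event-driven spiking convolution, not the SoC's actual hardware pipeline: each incoming camera event projects a weight kernel onto a map of membrane potentials, and only neurons that cross a firing threshold emit a spike and reset. Computation scales with the number of events rather than the frame size, which is where the sparsity advantage comes from.

```python
import numpy as np

def process_event(v, kernel, x, y, threshold=1.0):
    """Accumulate one sensor event at (x, y) into the membrane-potential
    map v and return the coordinates of any resulting output spikes.
    (Illustrative sketch; names and parameters are assumptions.)"""
    kh, kw = kernel.shape
    H, W = v.shape
    # Add the kernel centred on the event location (clipped at borders).
    for i in range(kh):
        for j in range(kw):
            yy, xx = y + i - kh // 2, x + j - kw // 2
            if 0 <= yy < H and 0 <= xx < W:
                v[yy, xx] += kernel[i, j]
    # Neurons whose potential crosses the threshold spike and reset.
    spikes = np.argwhere(v >= threshold)
    for yy, xx in spikes:
        v[yy, xx] = 0.0
    return [tuple(int(c) for c in s) for s in spikes]
```

Because the update touches only the kernel-sized neighbourhood of each event, an idle scene costs no computation at all; a dense frame-based convolution, by contrast, pays the full cost every frame regardless of activity.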