Spiking neural networks (SNNs), a variant of artificial neural networks (ANNs) with the benefit of energy efficiency, have achieved accuracy close to that of their ANN counterparts on benchmark datasets such as CIFAR10/100 and ImageNet. However, compared with frame-based inputs (e.g., images), event-based inputs from, e.g., a Dynamic Vision Sensor (DVS) can make better use of SNNs thanks to their asynchronous working mechanism. In this paper, we strengthen the marriage between SNNs and event-based inputs by proposing anytime optimal inference SNNs, or AOI-SNNs, which can terminate at any time during inference while achieving an optimal inference result. Two novel optimisation techniques are presented to achieve AOI-SNNs: a regularisation and a cutoff. The regularisation enables the training and construction of SNNs with optimised performance, and the cutoff technique optimises the inference of SNNs on event-driven inputs. We conduct an extensive set of experiments on multiple benchmark event-based datasets, including CIFAR10-DVS, N-Caltech101 and DVS128 Gesture. The experimental results demonstrate that our techniques are superior to the state-of-the-art with respect to accuracy and latency.
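To make the anytime-cutoff idea concrete, the sketch below shows one plausible way an inference loop could terminate early on event-driven inputs. It is only an illustrative assumption, not the paper's actual cutoff rule: the `snn` model, its `num_classes` attribute, the softmax-confidence criterion and the `conf_threshold` value are all hypothetical.

```python
import torch

def anytime_inference(snn, event_frames, conf_threshold=0.9):
    """Illustrative anytime inference with an early cutoff (a sketch, not the paper's method).

    Assumes `snn` is a stateful spiking model that consumes one event frame per
    timestep and returns per-timestep class evidence of shape [batch, num_classes];
    `event_frames` has shape [T, batch, ...].
    """
    logits = torch.zeros(event_frames.shape[1], snn.num_classes)
    steps_used = 0
    for t, frame in enumerate(event_frames):              # iterate over timesteps
        logits = logits + snn(frame)                      # accumulate output evidence
        steps_used = t + 1
        conf = torch.softmax(logits, dim=-1).max(dim=-1).values
        if bool((conf >= conf_threshold).all()):          # cutoff: stop once confident
            break
    return logits.argmax(dim=-1), steps_used              # prediction and latency (timesteps)
```

Under such a scheme, easy inputs terminate after a few timesteps (low latency) while harder inputs use more of the event stream, which is the trade-off the abstract's accuracy/latency claim refers to.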