This paper presents a new methodology to alleviate the fundamental trade-off between accuracy and latency in spiking neural networks (SNNs). The approach decodes confidence information over time from the SNN outputs and uses it to develop a decision-making agent that can dynamically determine when to terminate each inference. The proposed method, Dynamic Confidence, provides several significant benefits to SNNs. 1. It can effectively optimize latency dynamically at runtime, setting it apart from many existing low-latency SNN algorithms. Our experiments on the CIFAR-10 and ImageNet datasets demonstrate an average 40% speedup across eight different settings after applying Dynamic Confidence. 2. The decision-making agent in Dynamic Confidence is straightforward to construct and highly robust in parameter space, making it extremely easy to implement. 3. The proposed method makes it possible to visualize the potential of any given SNN, which sets a target for current SNNs to approach. For instance, if an SNN could terminate at the most appropriate time point for each input sample, a ResNet-50 SNN could achieve an accuracy as high as 82.47% on ImageNet within just 4.71 time steps on average. Unlocking the potential of SNNs requires constructing a highly reliable decision-making agent and feeding it a high-quality estimate of the ground truth. In this regard, Dynamic Confidence represents a meaningful step toward realizing the potential of SNNs.
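To make the confidence-gated early-termination idea concrete, the following is a minimal Python sketch assuming the SNN's accumulated outputs (e.g. spike counts or membrane potentials) are available at each time step. The function names, the softmax decoding of confidence, and the fixed 0.9 threshold are illustrative assumptions for exposition, not the paper's exact decision-making agent.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over the last axis."""
    z = x - np.max(x, axis=-1, keepdims=True)
    e = np.exp(z)
    return e / np.sum(e, axis=-1, keepdims=True)

def confidence_gated_inference(logits_per_step, threshold=0.9):
    """Terminate SNN inference early once output confidence is high enough.

    logits_per_step: array of shape (T, num_classes) holding the decoded
        SNN output at each of T time steps.
    threshold: confidence level at which inference stops (an illustrative
        fixed value; the paper's agent may determine this differently).
    Returns (predicted_class, time_steps_used).
    """
    for t, logits in enumerate(logits_per_step, start=1):
        conf = softmax(logits)          # decode a confidence estimate from the output
        if conf.max() >= threshold:     # confident enough: stop inference now
            return int(conf.argmax()), t
    # Fall back to the final time step if confidence never crosses the threshold.
    return int(softmax(logits_per_step[-1]).argmax()), len(logits_per_step)
```

In this sketch, easy inputs cross the threshold after few time steps while hard inputs run longer, which is the mechanism by which average latency is reduced without retraining the SNN itself.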