Event cameras sense brightness changes and output binary, asynchronous event streams, and they are attracting increasing attention. Their bio-inspired dynamics align well with spiking neural networks (SNNs), offering a promising energy-efficient alternative to conventional vision systems. However, SNNs remain costly to train because of their temporal coding, which limits practical deployment. To alleviate this training cost, we introduce \textbf{PACE} (Phase-Aligned Condensation for Events), the first dataset distillation framework for SNNs and event-based vision. PACE distills a large training set into a compact synthetic one that enables fast SNN training, achieved through two core modules: \textbf{ST-DSM} and \textbf{PEQ-N}. ST-DSM uses residual membrane potentials to densify spike-based features (SDR) and to perform fine-grained spatiotemporal matching of amplitude and phase (ST-SM), while PEQ-N provides a plug-and-play, straight-through probabilistic integer quantizer compatible with standard event-frame pipelines. Across the DVS-Gesture, CIFAR10-DVS, and N-MNIST datasets, PACE outperforms existing coreset-selection and dataset distillation baselines, with particularly strong gains on dynamic event streams and at low to moderate numbers of synthetic instances per class (IPC). On N-MNIST, it reaches \(84.4\%\) accuracy, roughly \(85\%\) of full-training-set performance, while cutting training time by more than \(50\times\) and storage cost by \(6000\times\), yielding compact surrogates that enable minute-scale SNN training and efficient edge deployment.
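To make the quantizer idea concrete, the following is a minimal PyTorch sketch of a straight-through probabilistic integer quantizer of the kind the abstract attributes to PEQ-N: real-valued synthetic event frames are stochastically rounded to integer counts in the forward pass, while gradients pass through unchanged. The class name \texttt{ProbIntQuantSTE}, the \texttt{n\_max} bound, and the stochastic-rounding scheme are illustrative assumptions, not the paper's exact PEQ-N definition.

\begin{verbatim}
import torch

class ProbIntQuantSTE(torch.autograd.Function):
    """Sketch of a straight-through probabilistic integer quantizer.

    Forward: stochastic rounding of a non-negative real value to an
    integer count in [0, n_max]; backward: identity (straight-through),
    so gradients still reach the distilled synthetic frames.
    """

    @staticmethod
    def forward(ctx, x, n_max):
        x = x.clamp(0, n_max)
        floor = x.floor()
        # Round up with probability equal to the fractional part,
        # so the quantized value is an unbiased estimate of x.
        prob_up = x - floor
        return floor + torch.bernoulli(prob_up)

    @staticmethod
    def backward(ctx, grad_output):
        # Straight-through estimator: pass the gradient unchanged.
        return grad_output, None


def quantize_events(x, n_max=15):
    """Quantize real-valued synthetic event frames to integer counts."""
    return ProbIntQuantSTE.apply(x, n_max)
\end{verbatim}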