How can we bring both privacy and energy efficiency to a neural system on edge devices? In this paper, we propose PrivateSNN, which aims to build low-power Spiking Neural Networks (SNNs) from a pre-trained ANN model without leaking sensitive information contained in the training dataset. We tackle two types of leakage: 1) data leakage, which occurs when the networks access real training data during the ANN-SNN conversion process, and 2) class leakage, which occurs when class-related features can be reconstructed from network parameters. To address data leakage, we generate synthetic images from the pre-trained ANN and convert the ANN to an SNN using the generated images. However, the converted SNN is still vulnerable to class leakage, since its weight parameters have the same (or linearly scaled) values as the ANN parameters. Therefore, we encrypt the SNN weights by training the SNN with a temporal spike-based learning rule. Updating the weight parameters with temporal data makes the network difficult to interpret in the spatial domain. We observe that the encrypted PrivateSNN incurs only a small performance drop (less than ~5%) while achieving a significant energy-efficiency gain (about 60x compared to a standard ANN). We conduct extensive experiments on various datasets, including CIFAR10, CIFAR100, and TinyImageNet, highlighting the importance of privacy-preserving SNN training.
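To make the data-leakage fix concrete, below is a minimal PyTorch sketch of the first two steps: synthetic images are optimized against the frozen pre-trained ANN so no real training data is touched, and the converted SNN's layer thresholds are then calibrated on those synthetic images. The function names, the total-variation prior, and all hyperparameters (steps, learning rate, 32x32 input size, max-based threshold balancing) are illustrative assumptions, not the paper's exact procedure.

```python
import torch
import torch.nn.functional as F

def synthesize_images(ann, num_images, num_classes, steps=200, lr=0.05, device="cpu"):
    """Data-free synthesis: optimize noise so the frozen ANN assigns high
    confidence to randomly chosen target classes (a sketch; the paper's
    generator may use additional priors)."""
    ann.eval()
    x = torch.randn(num_images, 3, 32, 32, device=device, requires_grad=True)
    targets = torch.randint(0, num_classes, (num_images,), device=device)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(ann(x), targets)
        # Total-variation prior keeps images smooth (hypothetical weighting).
        tv = (x[..., 1:, :] - x[..., :-1, :]).abs().mean() \
           + (x[..., :, 1:] - x[..., :, :-1]).abs().mean()
        (loss + 1e-4 * tv).backward()
        opt.step()
    return x.detach(), targets

@torch.no_grad()
def balance_thresholds(ann_layers, synthetic_batch):
    """Conversion step: set each spiking layer's firing threshold to the
    maximum pre-activation observed on synthetic data, so SNN firing rates
    approximate the ANN's ReLU activations."""
    thresholds = []
    h = synthetic_batch
    for layer in ann_layers:  # e.g. a flat list [conv1, conv2, ..., fc]
        h = layer(h)
        thresholds.append(h.relu().max().item())
        h = h.relu()
    return thresholds
```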
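For the class-leakage fix, the converted weights serve only as an initialization and are then re-trained in the temporal domain, which decorrelates them from the original ANN parameters. The sketch below uses Poisson rate coding and a surrogate-gradient LIF loop as a stand-in for the paper's temporal spike-based rule; the network shape, triangular surrogate, leak factor, and timestep count are all assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpikeFn(torch.autograd.Function):
    """Heaviside spike with a triangular surrogate gradient, enabling
    backpropagation through the non-differentiable firing function."""
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v >= 1.0).float()  # fire when membrane crosses threshold 1.0
    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        return grad_out * torch.clamp(1.0 - (v - 1.0).abs(), min=0.0)

spike = SpikeFn.apply

class TinySNN(nn.Module):
    """Two-layer LIF network; weights would be initialized from the
    converted SNN and then re-trained over time (the 'encryption' step)."""
    def __init__(self, in_dim=3 * 32 * 32, hidden=256, classes=10, leak=0.9):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden)
        self.fc2 = nn.Linear(hidden, classes)
        self.leak = leak

    def forward(self, x, timesteps=20):
        v1 = torch.zeros(x.size(0), self.fc1.out_features, device=x.device)
        out = torch.zeros(x.size(0), self.fc2.out_features, device=x.device)
        for _ in range(timesteps):
            inp = (torch.rand_like(x) < x).float().flatten(1)  # Poisson rate coding, x in [0, 1]
            v1 = self.leak * v1 + self.fc1(inp)  # leaky membrane integration
            s1 = spike(v1)
            v1 = v1 - s1                         # soft reset by the threshold
            out = out + self.fc2(s1)             # accumulate output current
        return out / timesteps

def encrypt_step(snn, opt, x, y):
    """One temporal training step on synthetic images (x) and their
    assigned labels (y); gradients flow through the surrogate."""
    opt.zero_grad()
    loss = F.cross_entropy(snn(x), y)
    loss.backward()
    opt.step()
    return loss.item()
```

Because the update is driven by spike timing rather than static activations, the re-trained weights no longer match the ANN's spatially interpretable filters, which is what frustrates parameter-based class reconstruction.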