Privacy preservation is a key concern for machine learning algorithms. Spiking neural networks (SNNs) play an important role in many domains, such as image classification, object detection, and speech recognition, yet the privacy protection of SNNs remains largely unstudied. This study combines the differential privacy (DP) algorithm with SNNs and proposes the differentially private spiking neural network (DPSNN). DP injects noise into the gradient, and the SNN transmits information in discrete spike trains, so that DPSNN maintains strong privacy protection while still achieving high accuracy. We conducted experiments on MNIST, Fashion-MNIST, and the face recognition dataset Extended YaleB. As privacy protection is strengthened, the accuracy of the artificial neural network (ANN) drops significantly, whereas our algorithm shows little change in performance. We also analyzed the factors that affect the privacy protection of SNNs. First, the less precise the surrogate gradient is, the better the privacy protection of the SNN. Second, Integrate-and-Fire (IF) neurons perform better than Leaky Integrate-and-Fire (LIF) neurons. Third, a large time window contributes to both privacy protection and performance.
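The core DP mechanism the abstract refers to, injecting noise into the gradient, is typically realized by clipping each per-sample gradient and adding calibrated Gaussian noise before averaging (the DP-SGD scheme). The sketch below illustrates that single aggregation step; the function name, signature, and parameters are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def dp_sgd_step(per_sample_grads, clip_norm, noise_multiplier, rng):
    """One illustrative DP-SGD aggregation step (not the paper's code):
    clip each per-sample gradient to clip_norm, sum, add Gaussian noise
    scaled by noise_multiplier * clip_norm, and average."""
    clipped = []
    for g in per_sample_grads:
        norm = np.linalg.norm(g)
        # Scale down gradients whose L2 norm exceeds the clipping bound.
        factor = min(1.0, clip_norm / (norm + 1e-12))
        clipped.append(g * factor)
    total = np.sum(clipped, axis=0)
    # Gaussian noise calibrated to the clipping bound (the sensitivity).
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(per_sample_grads)
```

In an SNN this noisy gradient would be computed through the surrogate gradient of the spike function, which is one reason the abstract argues the imprecision of the surrogate gradient itself adds further protection.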