Machine Learning (ML) architectures have been applied in many domains involving sensitive data, where guarantees of users' data privacy are required. Differentially Private Stochastic Gradient Descent (DPSGD) is the state-of-the-art method for training privacy-preserving models. However, DPSGD incurs a considerable accuracy loss, leading to sub-optimal privacy/utility trade-offs. To explore new ground for a better privacy/utility trade-off, this work investigates (i) whether a model's hyperparameters have any inherent impact on its privacy-preserving properties, and (ii) whether a model's hyperparameters affect the privacy/utility trade-off of differentially private models. We propose a comprehensive design space exploration of hyperparameters such as the choice of activation function, the learning rate, and the use of batch normalization. Interestingly, we find that utility can be improved by using bounded ReLU as the activation function while preserving the same privacy guarantees. With this drop-in replacement of the activation function, we achieve new state-of-the-art accuracy on MNIST (96.02\%), FashionMNIST (84.76\%), and CIFAR-10 (44.42\%) without any modification to the fundamentals of the DPSGD learning procedure.
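To make the drop-in replacement concrete, a minimal sketch of a bounded ReLU is given below (this is an illustrative definition, not the authors' code; the function name and the choice of bound are assumptions):

```python
import numpy as np

def bounded_relu(x, bound=1.0):
    """Bounded ReLU: clip activations to [0, bound].

    Unlike standard ReLU, whose output is unbounded above, this
    variant caps the activation magnitude. Bounded activations limit
    how large per-example gradients can grow, which interacts
    favorably with the gradient-clipping step of DPSGD.
    """
    return np.clip(x, 0.0, bound)

# Standard ReLU would pass 5.0 through unchanged;
# the bounded variant caps it at the chosen bound.
print(bounded_relu(np.array([-2.0, 0.5, 5.0]), bound=1.0))
```

In a typical deep learning framework this amounts to swapping the activation layer (e.g., a clamped/hard-tanh-style unit with a lower bound of 0) for the usual ReLU, leaving the rest of the DPSGD training loop untouched.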