We consider training models with differential privacy (DP) using mini-batch gradients. The existing state-of-the-art, Differentially Private Stochastic Gradient Descent (DP-SGD), requires privacy amplification by sampling or shuffling to obtain the best privacy/accuracy/computation trade-offs. Unfortunately, the precise requirements on exact sampling and shuffling can be hard to obtain in important practical scenarios, particularly federated learning (FL). We design and analyze a DP variant of Follow-The-Regularized-Leader (DP-FTRL) that compares favorably (both theoretically and empirically) to amplified DP-SGD, while allowing for much more flexible data access patterns. DP-FTRL does not use any form of privacy amplification. The code is available at https://github.com/google-research/federated/tree/master/dp_ftrl and https://github.com/google-research/DP-FTRL.
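To make the mechanism concrete, the sketch below illustrates the core idea behind DP-FTRL: instead of relying on amplification, it privatizes the prefix sums of clipped gradients with the tree-aggregation mechanism and takes FTRL steps from those noisy sums. This is a minimal illustration under our own simplifying assumptions, not the implementation in the linked repositories; the function names (`dp_ftrl`, `tree_noise`, `prefix_noise`, `clip`), the momentum-free single-pass variant, and the noise scale `sigma = noise_mult * clip_norm` are ours.

```python
import numpy as np


def clip(g, max_norm):
    # Project the gradient onto the L2 ball of radius max_norm.
    norm = np.linalg.norm(g)
    return g if norm <= max_norm else g * (max_norm / norm)


def tree_noise(T, dim, sigma, rng):
    # One fresh Gaussian noise vector per node of a binary tree with T
    # leaves; node (level, i) covers leaves [i * 2**level, (i + 1) * 2**level).
    noise, level, n = {}, 0, T
    while True:
        for i in range(n):
            noise[(level, i)] = rng.normal(0.0, sigma, size=dim)
        if n <= 1:
            return noise
        level, n = level + 1, (n + 1) // 2


def prefix_noise(t, noise):
    # Sum the noise of the O(log T) nodes whose ranges exactly cover
    # leaves [0, t): the standard binary decomposition of t.
    total, level = 0.0, 0
    while t > 0:
        if t % 2 == 1:
            total = total + noise[(level, t - 1)]
        t, level = t // 2, level + 1
    return total


def dp_ftrl(examples, loss_grad, theta0, lr, clip_norm, noise_mult, rng):
    # FTRL over tree-noised prefix sums of clipped gradients: no
    # sampling or shuffling assumption on the data order is needed.
    T, dim = len(examples), theta0.size
    noise = tree_noise(T, dim, noise_mult * clip_norm, rng)
    grad_sum = np.zeros(dim)
    theta = theta0.copy()
    for t in range(1, T + 1):
        grad_sum += clip(loss_grad(theta, examples[t - 1]), clip_norm)
        theta = theta0 - lr * (grad_sum + prefix_noise(t, noise))
    return theta


# Toy usage: privately estimate the mean of 2-d points via a quadratic loss.
rng = np.random.default_rng(0)
examples = rng.normal(loc=1.0, size=(8, 2))  # one example per round
grad = lambda theta, x: theta - x            # gradient of 0.5 * ||theta - x||^2
theta = dp_ftrl(examples, grad, np.zeros(2), lr=0.2,
                clip_norm=1.0, noise_mult=0.3, rng=rng)
print(theta)
```

Because each prefix sum is reconstructed from only O(log T) tree nodes, the noise it carries grows polylogarithmically in T rather than linearly, which is what lets DP-FTRL forgo sampling- or shuffling-based amplification while remaining competitive with amplified DP-SGD.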