Differential privacy (DP) techniques can be applied to federated learning models to protect data privacy against inference attacks on the communications among the learning agents. DP techniques, however, make it difficult to achieve high learning performance while ensuring strong data privacy. In this paper, we develop a DP inexact alternating direction method of multipliers (ADMM) algorithm that solves a sequence of subproblems whose objectives are perturbed by random noise drawn from a Laplace distribution. We show that our algorithm provides $\bar{\epsilon}$-DP at every iteration, where $\bar{\epsilon}$ is a privacy parameter controlled by the user. Using the MNIST and FEMNIST image-classification datasets, we demonstrate that our algorithm reduces the testing error by up to $22\%$ compared with an existing DP algorithm, while achieving the same level of data privacy. The numerical experiments also show that our algorithm converges faster than the existing algorithm.
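As a minimal illustrative sketch (not the paper's actual implementation), the objective-perturbation step can be pictured as adding a Laplace-distributed linear term to each agent's local subproblem before solving it. The quadratic subproblem form, the function name, and the sensitivity parameter below are all assumptions made for illustration.

```python
import numpy as np

def solve_perturbed_subproblem(A, b, rho, z, y, epsilon_bar, sensitivity, rng):
    """Solve one agent's ADMM-style subproblem with Laplace objective perturbation.

    Illustrative subproblem (an assumption, not the paper's exact scheme):
        min_x 0.5*||A x - b||^2 + y^T (x - z) + (rho/2)*||x - z||^2 + eta^T x,
    where eta ~ Laplace(0, sensitivity/epsilon_bar) perturbs the objective
    to target epsilon_bar-DP for this iteration.
    """
    n = A.shape[1]
    scale = sensitivity / epsilon_bar                 # Laplace scale b = Δ / ε̄
    eta = rng.laplace(loc=0.0, scale=scale, size=n)   # objective-perturbation noise
    # First-order optimality: (A^T A + rho I) x = A^T b - y + rho z - eta
    H = A.T @ A + rho * np.eye(n)
    g = A.T @ b - y + rho * z - eta
    return np.linalg.solve(H, g)

# Toy usage with random data (hypothetical parameter values).
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
x = solve_perturbed_subproblem(A, b, rho=1.0, z=np.zeros(5), y=np.zeros(5),
                               epsilon_bar=1.0, sensitivity=0.1, rng=rng)
```

Smaller values of `epsilon_bar` enlarge the Laplace scale, so the subproblem solution is perturbed more strongly: stronger privacy at the cost of accuracy, which is the trade-off the abstract describes.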