In this paper, we focus on facilitating differentially private quantized communication between the clients and server in federated learning (FL). Towards this end, we propose to have the clients send a \textit{privately quantized} version of only the \textit{unit vector} along the change in their local parameters to the server, \textit{completely throwing away the magnitude information}. We call this algorithm \texttt{DP-NormFedAvg} and show that it has the same order-wise convergence rate as \texttt{FedAvg} on smooth quasar-convex functions (an important class of non-convex functions for modeling optimization of deep neural networks), thereby establishing that discarding the magnitude information is not detrimental from an optimization point of view. We also introduce QTDL, a new differentially private quantization mechanism for unit-norm vectors, which we use in \texttt{DP-NormFedAvg}. QTDL employs \textit{discrete} noise having a Laplacian-like distribution on a \textit{finite support} to provide privacy. We show that, under a growth-condition assumption on the per-sample client losses, the extra per-coordinate communication cost per round that our method incurs due to privacy is $\mathcal{O}(1)$ with respect to the model dimension, which is an improvement over prior work. Finally, we show the efficacy of our proposed method with experiments on fully-connected neural networks trained on CIFAR-10 and Fashion-MNIST.
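To make the client-side step concrete, the following is a minimal Python sketch of the idea described above: normalize the local parameter change, quantize it onto a finite grid, and add discrete noise with a Laplacian-like distribution (here, a clipped two-sided geometric). The helper name, the number of quantization levels, and the noise parameterization are illustrative assumptions; this is not the paper's actual QTDL construction or its privacy calibration.

\begin{verbatim}
import numpy as np

def client_privatized_direction(delta, epsilon, num_levels=16, rng=None):
    """Illustrative only: send a privatized, quantized unit vector along
    the local parameter change `delta`, discarding its magnitude.
    NOT the paper's QTDL mechanism; parameters are placeholders."""
    rng = np.random.default_rng() if rng is None else rng
    # Keep only the direction of the update (magnitude is thrown away).
    unit = delta / (np.linalg.norm(delta) + 1e-12)
    # Map each coordinate in [-1, 1] to one of `num_levels` discrete levels.
    levels = np.round((unit + 1.0) / 2.0 * (num_levels - 1)).astype(int)
    # Integer-valued, Laplacian-like noise: difference of two geometrics,
    # then clipped back onto the finite support {0, ..., num_levels - 1}.
    p = 1.0 - np.exp(-epsilon)
    noise = rng.geometric(p, size=levels.shape) \
            - rng.geometric(p, size=levels.shape)
    return np.clip(levels + noise, 0, num_levels - 1)
\end{verbatim}

In this sketch the server would de-quantize the received levels back to $[-1, 1]$ and aggregate the resulting unit directions; the finite support keeps the per-coordinate message size bounded independently of the model dimension.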