We study a general distributed constrained online learning problem with privacy over time-varying networks, in which a class of nondecomposable objectives is considered. In this setting, each node controls only a part of the global decision variable, and the goal of all nodes is to collaboratively minimize the global cost over a time horizon $T$ while guaranteeing the security of the transmitted information. For such problems, we first design a novel generic algorithm framework, named DPSDA, for differentially private distributed online learning based on the Laplace mechanism and stochastic variants of the dual averaging method. Note that in the dual updates, all nodes of DPSDA employ noise-corrupted gradients for greater generality. We then propose two algorithms under this framework, named DPSDA-C and DPSDA-PS. In DPSDA-C, the nodes implement circulation-based communication in the primal updates to alleviate disagreement over time-varying undirected networks. For the extension to time-varying directed networks, the nodes in DPSDA-PS implement broadcast-based push-sum dynamics, which achieve average consensus over arbitrary directed networks. Theoretical results show that both algorithms attain an expected regret upper bound of $\mathcal{O}(\sqrt{T})$ when the objective function is convex, which matches the best utility achievable by cutting-edge algorithms. Finally, numerical experiments on both synthetic and real-world datasets verify the effectiveness of our algorithms.
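The core mechanism described above, dual averaging driven by Laplace-perturbed gradients, can be illustrated for a single node as follows. This is a minimal sketch, not the paper's DPSDA-C or DPSDA-PS: the function name `private_dual_averaging`, the $\ell_2$-ball constraint set, and the step-size schedule are illustrative assumptions, and the consensus steps over the network are omitted.

```python
import numpy as np

def private_dual_averaging(grads, epsilon=1.0, sensitivity=1.0, radius=1.0, seed=0):
    """Single-node sketch of dual averaging with Laplace-perturbed gradients.

    grads: a sequence of gradient vectors g_1, ..., g_T.
    Returns the final primal iterate, projected onto an l2 ball of the
    given radius (an illustrative stand-in for the constraint set).
    """
    rng = np.random.default_rng(seed)
    z = np.zeros_like(grads[0])  # dual variable: running sum of noisy gradients
    x = np.zeros_like(grads[0])
    for t, g in enumerate(grads, start=1):
        # Laplace mechanism: noise scale = sensitivity / epsilon
        noisy_g = g + rng.laplace(loc=0.0, scale=sensitivity / epsilon, size=g.shape)
        z += noisy_g
        # O(1/sqrt(t)) step size, consistent with an O(sqrt(T)) regret bound
        x = -(radius / np.sqrt(t)) * z
        norm = np.linalg.norm(x)
        if norm > radius:  # Euclidean projection onto the ball
            x *= radius / norm
    return x
```

In the distributed setting of the paper, each node would additionally exchange (primal or dual) information with its neighbors, via circulation-based communication on undirected networks or push-sum weights on directed ones, before taking such a step.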