In this paper, we address a general distributed constrained online learning problem with privacy over time-varying networks, where a class of nondecomposable objective functions is considered. In this setting, each node controls only a part of the global decision variable, and the goal of all nodes is to collaboratively minimize the global objective over a time horizon $T$ while guaranteeing the security of the transmitted information. For such problems, we first design a novel generic algorithmic framework for differentially private distributed online learning, named DPSDA, which employs the Laplace mechanism and stochastic variants of the dual averaging method. Then, we propose two algorithms under this framework, named DPSDA-C and DPSDA-PS. Theoretical results show that both algorithms attain an expected regret upper bound of $\mathcal{O}(\sqrt{T})$ when the objective function is convex, which matches the best utility achievable by state-of-the-art algorithms. Finally, numerical experiments on both real-world and randomly generated datasets verify the effectiveness of our algorithms.
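To make the privacy mechanism concrete, the following is a minimal illustrative sketch of the Laplace mechanism as commonly used in differentially private distributed learning: before a node transmits a local variable to its neighbors, noise with scale calibrated to the message's sensitivity and the privacy budget $\epsilon$ is added. The function and variable names below are illustrative and are not taken from the paper's notation or algorithms.

```python
import numpy as np

def laplace_privatize(message, sensitivity, epsilon, rng=None):
    """Add Laplace noise calibrated to sensitivity/epsilon.

    Sketch of the Laplace mechanism: perturbing `message` with
    Laplace(0, sensitivity/epsilon) noise before transmission makes
    the released value epsilon-differentially private with respect
    to the local data that produced it.
    """
    rng = np.random.default_rng() if rng is None else rng
    scale = sensitivity / epsilon
    return message + rng.laplace(loc=0.0, scale=scale, size=message.shape)

# Example: privatize a node's local variable before broadcasting it.
z_local = np.array([0.5, -1.2, 3.0])
z_noisy = laplace_privatize(z_local, sensitivity=1.0, epsilon=0.5)
```

A smaller $\epsilon$ (stronger privacy) yields a larger noise scale, which is the source of the privacy-utility trade-off reflected in the regret analysis.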