We study the privatization of distributed learning and optimization strategies. We focus on differential privacy schemes and study their effect on performance. We show that the popular additive random perturbation scheme degrades performance because it is not well-tuned to the graph structure. For this reason, we exploit two alternative graph-homomorphic constructions and show that they improve performance while guaranteeing privacy. Moreover, contrary to most earlier studies, the gradients of the risks are not assumed to be bounded (a condition that rarely holds in practice, e.g., for quadratic risks). We dispense with this condition and still establish a differential privacy guarantee that holds with high probability. We examine both optimization and learning scenarios and illustrate the theoretical findings through simulations.
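To make the additive random perturbation scheme mentioned above concrete, the following minimal Python sketch shows the standard construction in a decentralized (diffusion-type) setting: each agent adds Laplace noise to its local iterate before sharing it with neighbors. This is a generic illustration under assumed bounded sensitivity, not the paper's specific algorithm; the names `privatized_share`, `diffusion_step`, and the parameters `epsilon`, `sensitivity`, `A`, and `mu` are illustrative choices.

```python
import numpy as np

def privatized_share(w, epsilon, sensitivity):
    """Additive random perturbation: add Laplace noise to a local iterate
    before sharing it. The noise scale assumes a known bounded sensitivity,
    which (as noted in the abstract) rarely holds, e.g., for quadratic risks."""
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon, size=w.shape)
    return w + noise

def diffusion_step(W, grads, A, mu, epsilon, sensitivity):
    """One toy decentralized step over a graph.
    W:     K x M matrix of agent iterates (one row per agent)
    grads: K x M local gradients
    A:     K x K left-stochastic combination matrix of the graph
    mu:    step size
    """
    psi = W - mu * grads                                            # local adaptation
    noisy = np.vstack([privatized_share(p, epsilon, sensitivity)    # perturb messages
                       for p in psi])
    return A.T @ noisy                                              # combine noisy neighbors
```

The sketch highlights why the perturbation interacts with the graph: the combination step `A.T @ noisy` mixes the injected noise across agents, so its effect on performance depends on the combination matrix rather than on each agent in isolation.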