Pairwise learning focuses on learning tasks with pairwise loss functions, which depend on pairs of training instances and are naturally suited to modeling relationships between pairs of samples. In this paper, we study the privacy of pairwise learning and propose a new differential privacy paradigm for pairwise learning based on gradient perturbation. We analyze the privacy guarantees from two perspectives: the $\ell_2$-sensitivity and the moments accountant method. We further analyze the generalization error, the excess empirical risk, and the excess population risk of the proposed method and give corresponding bounds. By introducing algorithmic stability theory into pairwise differential privacy, our theoretical analysis does not require convex pairwise loss functions, so our method applies to both convex and non-convex settings. Moreover, the resulting utility bounds improve on previous bounds derived under convexity or strong convexity assumptions, which is an attractive result.
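To make the gradient-perturbation idea concrete, the following is a minimal sketch (not the paper's exact algorithm) of one noisy gradient step on a pairwise logistic loss: per-pair gradients are clipped to bound the $\ell_2$-sensitivity, then Gaussian noise calibrated to the clipping norm is added before the update. All names (`dp_pairwise_gradient_step`, `clip`, `sigma`) are illustrative assumptions; the noise scale `sigma` would be set by the privacy analysis (e.g., the moments accountant).

```python
import numpy as np

def dp_pairwise_gradient_step(w, X, y, lr=0.1, clip=1.0, sigma=1.0, rng=None):
    """One gradient-perturbation step on a pairwise logistic (ranking) loss.

    Hypothetical sketch: clips each per-pair gradient to norm `clip`
    (bounding the l2-sensitivity), sums them, adds Gaussian noise of
    scale sigma * clip, and takes an averaged gradient step.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = len(y)
    grad_sum = np.zeros_like(w)
    count = 0
    for i in range(n):
        for j in range(i + 1, n):
            if y[i] == y[j]:
                continue  # pairwise loss is defined on pairs with different labels
            diff = X[i] - X[j]
            label = 1.0 if y[i] > y[j] else -1.0
            z = label * (w @ diff)
            # gradient of log(1 + exp(-z)) w.r.t. w
            g = -label * diff / (1.0 + np.exp(z))
            # clip the per-pair gradient to bound the contribution of any pair
            g = g / max(1.0, np.linalg.norm(g) / clip)
            grad_sum += g
            count += 1
    # Gaussian perturbation calibrated to the clipping norm
    noise = sigma * clip * rng.standard_normal(w.shape)
    return w - lr * (grad_sum + noise) / max(count, 1)
```

Because the loss here need not be convex in general models, this style of analysis is where algorithmic stability arguments (rather than convexity) come into play.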