Pairwise learning concerns learning tasks with pairwise loss functions, which depend on pairs of training instances and therefore naturally model relationships between pairs of samples. In this paper, we focus on the privacy of pairwise learning and propose a new differential privacy paradigm for pairwise learning based on gradient perturbation. Beyond the privacy guarantees, we also analyze the excess population risk and derive corresponding bounds both in expectation and with high probability. We use the \textit{on-average stability} and the \textit{pairwise locally elastic stability} theories to analyze the expectation bound and the high-probability bound, respectively. Moreover, our utility bounds do not require convex pairwise loss functions, so our method applies to both convex and non-convex settings. Even so, the utility bounds are comparable to (or better than) previous bounds obtained under convexity or strong convexity assumptions, which is an attractive result.
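To make the gradient-perturbation paradigm concrete, the following is a minimal sketch of a noisy update rule for pairwise learning; the mini-batch of instance pairs $B_t$, step size $\eta_t$, and Gaussian noise scale $\sigma$ are illustrative notation, not taken from the paper:
\[
w_{t+1} \;=\; w_t \;-\; \eta_t \left( \frac{1}{|B_t|} \sum_{(i,j)\in B_t} \nabla_w \ell(w_t; z_i, z_j) \;+\; b_t \right), \qquad b_t \sim \mathcal{N}(0, \sigma^2 I),
\]
where $\ell(w; z_i, z_j)$ denotes the pairwise loss evaluated on the instance pair $(z_i, z_j)$, and the injected Gaussian noise $b_t$ is what yields the differential privacy guarantee in a gradient-perturbation scheme of this kind.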