Propose-Test-Release (PTR) is a differential privacy framework that works with the local sensitivity of functions, instead of their global sensitivity. This framework is typically used for releasing robust statistics, such as the median or trimmed mean, in a differentially private manner. Although PTR is a common framework introduced over a decade ago, using it in applications such as robust SGD, where many adaptive robust queries are needed, is challenging. This is mainly due to the lack of a Renyi Differential Privacy (RDP) analysis, an essential ingredient underlying the moments accountant approach for differentially private deep learning. In this work, we generalize the standard PTR and derive the first RDP bound for it when the target function has bounded global sensitivity. We show that our RDP bound for PTR yields tighter DP guarantees than the directly analyzed $(\epsilon, \delta)$-DP. We also derive the algorithm-specific privacy amplification bound of PTR under subsampling. We show that our bound is much tighter than the general upper bound and close to the lower bound. Our RDP bounds enable tighter privacy loss calculation for the composition of many adaptive runs of PTR. As an application of our analysis, we show that PTR and our theoretical results can be used to design differentially private variants of Byzantine-robust training algorithms that use robust statistics for gradient aggregation. We conduct experiments on the settings of label, feature, and gradient corruption across different datasets and architectures. We show that the PTR-based private and robust training algorithm significantly improves utility compared with the baseline.
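To make the propose-test-release pattern concrete, the following is a minimal, illustrative sketch of PTR for the median, not the generalized algorithm or the RDP analysis of this work. The stability test (distance to an unstable dataset, measured via order statistics around the median) and the threshold $\log(1/\delta)/\epsilon$ are common textbook choices and are assumptions here, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def ptr_median(data, b, eps, delta, lo=0.0, hi=1.0):
    """Illustrative Propose-Test-Release for the median (assumed variant).

    Propose: a bound b on the local sensitivity of the median.
    Test:    privately estimate the distance (in records) to the nearest
             dataset whose local sensitivity exceeds b.
    Release: if the noisy distance clears the threshold, release the
             median with Laplace noise scaled to b; otherwise refuse.
    """
    x = np.sort(np.clip(np.asarray(data, dtype=float), lo, hi))
    n = len(x)
    m = n // 2
    # Distance to instability: how many records around the median can be
    # changed before two order statistics straddling it differ by more
    # than the proposed bound b.
    dist = 0
    while (m - dist - 1 >= 0 and m + dist + 1 < n
           and x[m + dist + 1] - x[m - dist - 1] <= b):
        dist += 1
    # Test: add Laplace(1/eps) noise to the (sensitivity-1) distance and
    # compare against a log(1/delta)/eps threshold (a standard choice).
    noisy_dist = dist + rng.laplace(scale=1.0 / eps)
    if noisy_dist <= np.log(1.0 / delta) / eps:
        return None  # test failed: output "bottom", refuse to answer
    # Release: noise calibrated to the proposed local-sensitivity bound b.
    return x[m] + rng.laplace(scale=b / eps)
```

On a tightly concentrated dataset the test passes and a noisy median is released; on a bimodal dataset whose median is unstable, the mechanism returns `None` with overwhelming probability, which is the behavior that lets the release step use the much smaller local bound `b` in place of the global sensitivity.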