We investigate the semantic guarantees of private learning algorithms with respect to their resilience against training Data Reconstruction Attacks (DRAs) mounted by informed adversaries. To this end, we derive non-asymptotic minimax lower bounds on the adversary's reconstruction error against learners that satisfy differential privacy (DP) and metric differential privacy (mDP). Furthermore, we demonstrate that our lower bound analysis for the latter also covers the high-dimensional regime, wherein the input data dimensionality may exceed the adversary's query budget. Motivated by the theoretical improvements conferred by metric DP, we extend the privacy analysis of popular deep learning algorithms, such as DP-SGD and Projected Noisy SGD, to cover the broader notion of metric differential privacy.