We investigate the semantic guarantees of private learning algorithms in terms of their resilience to training Data Reconstruction Attacks (DRAs) by informed adversaries. To this end, we derive non-asymptotic minimax lower bounds on the adversary's reconstruction error against learners that satisfy differential privacy (DP) and metric differential privacy (mDP). Furthermore, we demonstrate that our lower bound analysis for the latter also covers the high-dimensional regime, wherein the input data dimensionality may exceed the adversary's query budget. Motivated by the theoretical improvements conferred by metric DP, we extend the privacy analysis of popular deep learning algorithms such as DP-SGD and Projected Noisy SGD to cover the broader notion of metric differential privacy.
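For orientation, the core mechanism of the DP-SGD algorithm referenced above (per-example gradient clipping followed by Gaussian noise addition) can be sketched as follows; the function and parameter names are illustrative, not taken from the paper:

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1, clip_norm=1.0,
                noise_mult=1.0, rng=None):
    """One illustrative DP-SGD update: clip each per-example gradient to
    L2 norm `clip_norm`, sum, add Gaussian noise scaled by `noise_mult`,
    then average and take a gradient step."""
    rng = np.random.default_rng() if rng is None else rng
    clipped = [g * min(1.0, clip_norm / max(np.linalg.norm(g), 1e-12))
               for g in per_example_grads]
    batch_size = len(clipped)
    noisy_sum = (np.sum(clipped, axis=0)
                 + noise_mult * clip_norm * rng.normal(size=params.shape))
    return params - lr * noisy_sum / batch_size
```

The clipping bound caps each example's influence on the update (the sensitivity), which is what makes the added Gaussian noise yield a DP guarantee; a metric-DP analysis would instead calibrate the noise to a distance between datasets.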