We investigate the semantic guarantees of private learning algorithms in terms of their resilience to training Data Reconstruction Attacks (DRAs) by informed adversaries. To this end, we derive non-asymptotic minimax lower bounds on the adversary's reconstruction error against learners that satisfy differential privacy (DP) and metric differential privacy (mDP). Furthermore, we show that our lower-bound analysis for the latter also covers the high-dimensional regime, in which the input data dimensionality may exceed the adversary's query budget. Motivated by the theoretical improvements conferred by metric DP, we extend the privacy analysis of popular deep learning algorithms such as DP-SGD and Projected Noisy SGD to the broader notion of metric differential privacy.
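For context, the DP-SGD mechanism referenced above combines per-example gradient clipping with Gaussian noise. The following is a minimal illustrative sketch of a single such step, not the paper's method; the function and parameter names (`dp_sgd_step`, `clip_norm`, `noise_multiplier`) are hypothetical.

```python
# Hypothetical sketch of one DP-SGD step: clip each per-example gradient
# to a fixed L2 norm, sum, add Gaussian noise, and average. Names are
# illustrative and not taken from the paper.
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm, noise_multiplier, rng):
    """Clip each example's gradient to L2 norm <= clip_norm, then return
    the noisy averaged gradient with noise std = noise_multiplier * clip_norm."""
    n, d = per_example_grads.shape
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    scale = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = per_example_grads * scale  # L2 sensitivity of the sum is clip_norm
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=d)
    return (clipped.sum(axis=0) + noise) / n

rng = np.random.default_rng(0)
grads = rng.normal(size=(8, 4))
noisy_grad = dp_sgd_step(grads, clip_norm=1.0, noise_multiplier=1.1, rng=rng)
```

With zero noise, the averaged clipped gradient has norm at most `clip_norm`; the Gaussian perturbation is what yields the (metric) DP guarantee.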