Differential privacy (DP) is widely accepted as the de facto standard for preventing data leakage in machine learning (ML), and conventional wisdom holds that it offers strong protection against privacy attacks. However, existing semantic guarantees for DP focus on membership inference, which may overestimate the adversary's capabilities and is inapplicable when membership status itself is non-sensitive. In this paper, we derive the first semantic guarantees for DP mechanisms against training data reconstruction attacks under a formal threat model. We show that two distinct privacy accounting methods -- Rényi differential privacy and Fisher information leakage -- both offer strong semantic protection against data reconstruction attacks.