Differential privacy (DP) is typically formulated as a worst-case privacy guarantee over all individuals in a database. More recently, extensions to individual subjects or their attributes have been introduced. Under the individual/per-instance DP interpretation, we study the connection between the per-subject gradient norm in DP neural network training and individual privacy loss, and we introduce a novel metric termed the Privacy Loss-Input Susceptibility (PLIS), which allows one to apportion a subject's privacy loss to their input attributes. We experimentally show how this enables the identification of sensitive attributes and of subjects at high risk of data reconstruction.
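A minimal sketch of the kind of quantity the abstract refers to, not the paper's actual implementation: it assumes a PLIS-style attribution can be approximated by differentiating the per-subject (per-sample) gradient norm, the quantity that drives clipping in DP-SGD, with respect to the subject's input attributes. The model, data, and the helper `per_sample_grad_norm` are hypothetical placeholders.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy model standing in for a DP-trained network (hypothetical).
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
loss_fn = nn.CrossEntropyLoss()

def per_sample_grad_norm(x, y):
    """L2 norm of the loss gradient w.r.t. the parameters for one subject."""
    loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
    grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
    return torch.sqrt(sum(g.pow(2).sum() for g in grads))

# One hypothetical subject with 4 input attributes.
x = torch.randn(4, requires_grad=True)
y = torch.tensor(1)

norm = per_sample_grad_norm(x, y)
# Attribute the clipping-relevant gradient norm back to the input attributes
# (assumed proxy for a per-attribute susceptibility score).
susceptibility = torch.autograd.grad(norm, x)[0].abs()
print("per-subject gradient norm:", norm.item())
print("per-attribute susceptibility:", susceptibility)
```

Under this (assumed) reading, attributes with larger susceptibility values contribute more to the subject's gradient norm, and hence to their individual privacy loss under per-sample clipping.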