We study membership inference in settings that relax assumptions commonly made in prior work. First, we consider skewed priors, covering cases where only a small fraction of the adversary's candidate pool are actually members, and develop a PPV-based metric suited to this setting, which is more realistic than the balanced prior typically assumed by researchers. Second, we consider adversaries that select inference thresholds according to their attack goals, and develop a threshold selection procedure that improves inference attacks. Since previous inference attacks fail in the imbalanced prior setting, we develop a new inference attack based on the intuition that inputs corresponding to training set members lie near a local minimum of the loss function, and show that an attack combining this intuition with thresholds on the per-instance loss can achieve high PPV even in settings where other attacks appear ineffective. Code for our experiments is available at: https://github.com/bargavj/EvaluatingDPML.
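To make the PPV metric and the loss-threshold intuition concrete, here is a minimal toy sketch, not the paper's actual attack: it predicts "member" when the per-instance loss falls below a threshold, and evaluates PPV under a skewed prior. The loss distributions, the 5% member fraction, and the threshold value are all illustrative assumptions.

```python
import numpy as np

def loss_threshold_attack(losses, threshold):
    # Predict "member" when the per-instance loss is below the threshold,
    # reflecting the intuition that training points sit near loss minima.
    return losses <= threshold

def ppv(predictions, is_member):
    # Positive predictive value: the fraction of predicted members
    # that are true members.
    n_predicted = predictions.sum()
    if n_predicted == 0:
        return 0.0
    return (predictions & is_member).sum() / n_predicted

rng = np.random.default_rng(0)

# Skewed prior: only 5% of the candidate pool are actual members.
n, member_frac = 10_000, 0.05
is_member = rng.random(n) < member_frac

# Toy loss model: members tend to have much lower loss than non-members.
losses = np.clip(
    np.where(is_member, rng.normal(0.05, 0.02, n), rng.normal(1.0, 0.3, n)),
    0.0, None,
)

preds = loss_threshold_attack(losses, threshold=0.2)
print(f"PPV = {ppv(preds, is_member):.2f}")
```

Under a balanced prior, even a modest false-positive rate leaves PPV high; under this 5% prior, the threshold must be far more conservative for predicted members to be mostly true members, which is why PPV is the appropriate success measure in this setting.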