Membership inference (MI) attacks threaten user privacy by inferring whether given data samples were used to train a target learning model, e.g., a deep neural network. There are two types of MI attacks in the literature, i.e., those with and without shadow models. The success of the former depends heavily on the quality of the shadow model, i.e., the transferability between the shadow and the target; the latter, given only black-box probing access to the target model, cannot make inferences on unknown samples as effective as those of shadow-model-based MI attacks, because too few qualified samples are labeled with ground-truth membership information. In this paper, we propose an MI attack, called BlindMI, which probes the target model and extracts membership semantics via a novel approach called differential comparison. The high-level idea is that BlindMI first generates a dataset of non-members by transforming existing samples into new ones, and then differentially moves samples from a target dataset to the generated non-member set in an iterative manner. If the differential move of a sample increases the set distance, BlindMI considers the sample a non-member, and vice versa. We evaluated BlindMI against state-of-the-art MI attack algorithms. Our evaluation shows that BlindMI improves the F1-score by nearly 20% over the state of the art on some datasets, such as Purchase-50 and Birds-200, in the blind setting where the adversary knows neither the target model's architecture nor the target dataset's ground-truth labels. We also show that BlindMI can defeat state-of-the-art defenses.
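To make the differential-comparison idea concrete, the following is a minimal Python/NumPy sketch, not the paper's reference implementation. It assumes maximum mean discrepancy (MMD) with a Gaussian kernel as the set distance over the target model's softmax outputs, a single sweep over the target set, and hypothetical helper names (mmd, differential_comparison) introduced only for illustration.

```python
import numpy as np

def mmd(X, Y, sigma=1.0):
    """Gaussian-kernel maximum mean discrepancy between two sample sets.
    A hypothetical choice of set distance for this sketch."""
    def k(A, B):
        d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
        return np.exp(-d2 / (2.0 * sigma**2))
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()

def differential_comparison(target_probs, nonmember_probs):
    """One sweep of differential comparison (illustrative sketch).

    target_probs:    (n, c) softmax outputs of the target model on the
                     samples whose membership we want to infer.
    nonmember_probs: (m, c) softmax outputs on the generated non-member set.
    Returns a boolean array; True = inferred member.
    """
    member = np.ones(len(target_probs), dtype=bool)
    non_set = nonmember_probs
    for i in range(len(target_probs)):
        if member.sum() <= 1:
            break  # avoid emptying the target side of the comparison
        before = mmd(target_probs[member], non_set)
        # Differentially move sample i into the non-member set.
        trial = member.copy()
        trial[i] = False
        moved = np.vstack([non_set, target_probs[i:i + 1]])
        after = mmd(target_probs[trial], moved)
        if after > before:
            # Moving i apart increased the set distance, so i behaves
            # like a non-member: commit the move.
            member, non_set = trial, moved
    return member
```

In this sketch, the non-member set could be obtained by applying a strong transformation (e.g., heavy noise) to available samples so that the target model has almost certainly never seen them; the transformation and iteration schedule actually used by BlindMI are those described in the paper.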