Deep learning has achieved overwhelming success, spanning from discriminative models to generative models. In particular, deep generative models have facilitated a new level of performance in a myriad of areas, ranging from media manipulation to sanitized dataset generation. Despite this great success, the potential privacy risks posed by generative models have not been systematically analyzed. In this paper, we focus on membership inference attacks against deep generative models, which reveal information about the training data used for the victim models. Specifically, we present the first taxonomy of membership inference attacks, encompassing not only existing attacks but also our novel ones. In addition, we propose the first generic attack model that can be instantiated in a wide range of settings and is applicable to various kinds of deep generative models. Moreover, we provide a theoretically grounded attack calibration technique, which consistently boosts attack performance across different attack settings, data modalities, and training configurations. We complement the systematic analysis of attack performance with a comprehensive experimental study that investigates the effectiveness of various attacks w.r.t. model type and training configuration, over three diverse application scenarios (i.e., images, medical data, and location data).
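To make the setting concrete, below is a minimal sketch (not the paper's implementation) of a reconstruction-based membership inference attack with calibration against a black-box generative model. The names `victim_generate` and `reference_generate` are hypothetical placeholders for the sampling interfaces of the victim model and of a reference model trained on disjoint data.

```python
# Hedged sketch of a generic, calibrated membership inference attack against
# a generative model; `victim_generate` / `reference_generate` are assumed
# sampling functions, not APIs from the paper's code.
import numpy as np

def reconstruction_error(query, generate, n_samples=1000):
    """Approximate how well a generator can reproduce `query`:
    draw samples and keep the smallest L2 distance to the query."""
    samples = generate(n_samples)                    # (n_samples, d) array
    dists = np.linalg.norm(samples - query, axis=1)  # L2 distance per sample
    return dists.min()

def calibrated_membership_score(query, victim_generate, reference_generate):
    """Calibrated attack: subtract the reconstruction error under a reference
    model so that intrinsically 'easy' queries are not mistaken for members."""
    err_victim = reconstruction_error(query, victim_generate)
    err_reference = reconstruction_error(query, reference_generate)
    # Lower calibrated error => higher score => more likely a training member.
    return -(err_victim - err_reference)
```

Thresholding this calibrated score then yields the membership decision; the calibration step is what corrects for query samples that any generator, member or not, can reproduce easily.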