Data is the foundation of most science. Unfortunately, sharing data can be obstructed by the risk of violating data privacy, impeding research in fields like healthcare. Synthetic data is a potential solution: it aims to generate data that has the same distribution as the original data but does not disclose information about individuals. Membership Inference Attacks (MIAs) are a common privacy attack, in which the attacker attempts to determine whether a particular real sample was used to train the model. Previous works proposing MIAs against generative models either exhibit low performance -- giving the false impression that data is highly private -- or assume access to internal generative model parameters, a relatively low-risk scenario since the data publisher often releases only the synthetic data, not the model. In this work, we argue for a realistic MIA setting that assumes the attacker has some knowledge of the underlying data distribution. We propose DOMIAS, a density-based MIA model that aims to infer membership by targeting local overfitting of the generative model. Experimentally, we show that DOMIAS is significantly more successful at MIA than previous methods, especially at attacking uncommon samples. The latter is disconcerting, since these samples may correspond to underrepresented groups. We also demonstrate how DOMIAS' MIA performance score provides an interpretable metric for privacy, giving data publishers a new tool for achieving the desired privacy-utility trade-off in their synthetic data.
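For intuition, the following is a minimal sketch of the density-based idea the abstract describes: compare the generator's output density against an estimate of the real data distribution, and flag candidate samples on which the generator places disproportionately high mass (i.e., local overfitting). The kernel density estimators, the `domias_scores` helper, and the zero decision threshold are illustrative assumptions for this sketch, not the paper's exact construction.

```python
import numpy as np
from sklearn.neighbors import KernelDensity

def domias_scores(synthetic, reference, candidates, bandwidth=0.5):
    """Density-ratio membership scores: high where the generator
    places more mass than the estimated real distribution does,
    i.e., where it locally overfits.

    All arrays have shape (n_samples, n_features). KDE is a stand-in
    for whatever density estimator the attacker prefers.
    """
    # Density of the released synthetic data.
    p_g = KernelDensity(bandwidth=bandwidth).fit(synthetic)
    # Attacker's estimate of the underlying real distribution,
    # fit on reference data assumed available in this threat model.
    p_r = KernelDensity(bandwidth=bandwidth).fit(reference)
    # log p_G(x) - log p_R(x): a candidate looks like a training
    # member when the generator assigns it unusually high density.
    return p_g.score_samples(candidates) - p_r.score_samples(candidates)

# Usage: flag candidates whose log density ratio exceeds a threshold.
rng = np.random.default_rng(0)
synthetic = rng.normal(0.0, 1.0, size=(1000, 2))   # released synthetic data
reference = rng.normal(0.0, 1.2, size=(1000, 2))   # attacker's reference data
candidates = rng.normal(0.0, 1.0, size=(10, 2))    # samples to test for membership
members = domias_scores(synthetic, reference, candidates) > 0.0
```

Note the design choice this sketch illustrates: no access to model parameters is needed, only the synthetic data and some knowledge of the real distribution, which matches the threat model the abstract argues is realistic.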