Tabular data typically contains private and important information; thus, precautions must be taken before such data are shared with others. Although several methods (e.g., differential privacy and k-anonymity) have been proposed to prevent information leakage, tabular data synthesis models have become popular in recent years because they can achieve a good trade-off between data utility and privacy. However, recent research has shown that generative models for image data are susceptible to the membership inference attack, which can determine whether a given record was used to train a victim synthesis model. In this paper, we investigate the membership inference attack in the context of tabular data synthesis. We conduct experiments on four state-of-the-art tabular data synthesis models under two attack scenarios (i.e., one black-box attack and one white-box attack), and find that the membership inference attack can seriously jeopardize these models. We then conduct experiments to evaluate how well two popular differentially private deep learning training algorithms, DP-SGD and DP-GAN, can protect the models against the attack. Our key finding is that both algorithms can largely alleviate this threat at the cost of generation quality.