Tabular data typically contains private and important information; thus, precautions must be taken before such data is shared with others. Although several methods (e.g., differential privacy and k-anonymity) have been proposed to prevent information leakage, tabular data synthesis models have become popular in recent years because they offer a good trade-off between data utility and privacy. However, recent research has shown that generative models for image data are susceptible to the membership inference attack, which can determine whether a given record was used to train a victim synthesis model. In this paper, we investigate the membership inference attack in the context of tabular data synthesis. We conduct experiments on 4 state-of-the-art tabular data synthesis models under two attack scenarios (i.e., one black-box and one white-box attack), and find that the membership inference attack can seriously jeopardize these models. We then conduct experiments to evaluate how well two popular differentially-private deep learning training algorithms, DP-SGD and DP-GAN, can protect the models against the attack. Our key finding is that both algorithms can largely alleviate this threat, at the cost of generation quality. Code and data are available at: https://github.com/JayoungKim408/MIA
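For readers unfamiliar with the attack setting, the sketch below illustrates one common black-box membership inference heuristic against a tabular synthesizer: each candidate record is scored by its distance to the closest synthetic record, on the intuition that training members tend to be reproduced more closely. The function names and the nearest-neighbor scoring rule are illustrative assumptions for exposition only, not the attack implementations evaluated in this work (see the repository above for those).

```python
# Minimal sketch of a distance-based black-box membership inference heuristic
# against a tabular data synthesizer. Illustrative only; not the paper's attack.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def membership_scores(synthetic: np.ndarray, queries: np.ndarray) -> np.ndarray:
    """Return a membership score per query record (higher = more likely a member)."""
    nn = NearestNeighbors(n_neighbors=1).fit(synthetic)
    dist, _ = nn.kneighbors(queries)   # distance to the closest synthetic record
    return -dist.ravel()               # closer synthetic record => higher score

def predict_members(synthetic: np.ndarray, queries: np.ndarray, threshold: float) -> np.ndarray:
    """Binary membership decision: score above an attacker-chosen threshold."""
    return membership_scores(synthetic, queries) > threshold
```

In this baseline, the attacker only needs samples from the synthesizer (black-box access); the white-box setting additionally assumes access to model internals such as the discriminator or likelihood scores.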