Deep generative models have attracted considerable attention for their ability to generate data for applications as varied as healthcare, financial technology, and surveillance, with generative adversarial networks (GANs) and variational auto-encoders (VAEs) being the most popular models. Yet, as with all machine learning models, concerns over security breaches and privacy leaks persist, and deep generative models are no exception. These models have advanced so rapidly in recent years that work on their security is still in its infancy. In an attempt to audit the current and future threats against these models, and to provide a roadmap for defense preparations in the short term, we present this comprehensive and specialized survey on the security and privacy preservation of GANs and VAEs. Our focus is on the inner connection between attacks and model architectures and, more specifically, on five components of deep generative models: the training data, the latent code, the generators/decoders of GANs/VAEs, the discriminators/encoders of GANs/VAEs, and the generated data. For each model, component, and attack, we review the current research progress and identify the key challenges. The paper concludes with a discussion of possible future attacks and research directions in the field.