In this paper, we propose a novel generative model-based attack on learnable image encryption methods proposed for privacy-preserving deep learning. Various learnable encryption methods have been studied to protect the sensitive visual information of plain images, and some have been claimed to be robust against all existing attacks. However, previous attacks on image encryption focus only on traditional cryptanalytic attacks or reverse translation models, so they cannot recover any visual information when a block-scrambling encryption step, which effectively destroys global information, is applied. Accordingly, in this paper, we explore for the first time whether generative models can restore sensitive visual information from encrypted images. We first point out that encrypted images retain some similarity to plain images in an embedding space. By exploiting this information leaked from encrypted images, we propose a guided generative model as an attack on learnable image encryption that recovers personally identifiable visual information. We implement the proposed attack in two ways, using two state-of-the-art generative models: a StyleGAN-based model and a latent diffusion-based model. Experiments were carried out on the CelebA-HQ and ImageNet datasets. The results show that images reconstructed by the proposed method are perceptually similar to the plain images.