Model inversion (MI) attacks aim to infer or reconstruct the training dataset by reverse-engineering the target model's weights. Recently, significant advances in generative models have enabled MI attacks to produce photo-realistic replicas of the training data, a line of work known as generative MI. Generative MI primarily focuses on finding latent vectors that correspond to specific target labels, leveraging a generative model trained on an auxiliary dataset. However, an important aspect is often overlooked: MI attacks fail if the pre-trained generative model lacks the coverage to generate an image of the target label, especially when the target and auxiliary datasets differ significantly. To address this gap, we propose Patch-MI, a method inspired by jigsaw puzzles that offers a novel probabilistic interpretation of MI attacks. Even with a dissimilar auxiliary dataset, our method effectively creates images that closely mimic the distribution of image patches in the target dataset via patch-based reconstruction. Moreover, we numerically demonstrate that Patch-MI improves Top-1 attack accuracy by 5\%p compared to existing methods.
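For context, the sketch below illustrates the latent-vector search that generative MI attacks build on, in its generic form rather than the Patch-MI objective: a latent code is optimized so that the target classifier assigns the generated image to the desired label. All names and hyperparameters (`generator`, `target_model`, `latent_dim`, the step count, and the learning rate) are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn.functional as F

def generative_mi_attack(generator, target_model, target_label,
                         latent_dim=100, steps=500, lr=0.05, device="cpu"):
    """Generic generative-MI baseline: search for a latent z such that
    the attacked classifier labels G(z) as target_label. Both `generator`
    (trained on an auxiliary dataset) and `target_model` are assumed
    to be pre-trained, frozen modules."""
    z = torch.randn(1, latent_dim, device=device, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    label = torch.tensor([target_label], device=device)
    for _ in range(steps):
        opt.zero_grad()
        img = generator(z)              # candidate reconstruction
        logits = target_model(img)      # query the attacked model
        loss = F.cross_entropy(logits, label)
        loss.backward()                 # gradient flows into z only
        opt.step()
    return generator(z).detach()        # best-effort replica of the class
```

As the abstract notes, this generic search can only recover images that lie within the generator's coverage; Patch-MI's patch-based reconstruction is motivated by precisely that limitation.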