In an unsupervised attack on variational autoencoders (VAEs), an adversary finds a small perturbation of an input sample that significantly changes its latent space encoding, thereby compromising the reconstruction for a fixed decoder. A known cause of this vulnerability is distortion of the latent space arising from a mismatch between the approximate latent posterior and the prior distribution. Consequently, a slight change in an input sample can move its encoding to a low- or zero-density region of the latent space, resulting in unconstrained generation. This paper demonstrates that an optimal strategy for an adversary attacking a VAE is to exploit the directional bias of the stochastic pullback metric tensor induced by the encoder and decoder networks. The pullback metric tensor of the encoder measures the change in infinitesimal volume under the map from the input space to the latent space. It can therefore be viewed as a lens for analysing how input perturbations lead to latent space distortions. We propose robustness evaluation scores based on the eigenspectrum of the pullback metric tensor. Moreover, we empirically show that these scores correlate with the robustness parameter $\beta$ of the $\beta$-VAE. Since increasing $\beta$ also degrades reconstruction quality, we demonstrate a simple alternative using \textit{mixup} training to fill the empty regions of the latent space, improving robustness while also improving reconstruction.
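To illustrate the quantity underlying this analysis (the notation below is a sketch using the deterministic mean map of the encoder, not necessarily the paper's exact stochastic definition), let $\mu_\phi \colon \mathcal{X} \to \mathcal{Z}$ denote the encoder mean and $J_{\mu_\phi}(x)$ its Jacobian at an input $x$. The induced pullback metric is
\begin{equation*}
G_\phi(x) \;=\; J_{\mu_\phi}(x)^{\top} J_{\mu_\phi}(x), \qquad J_{\mu_\phi}(x) \;=\; \frac{\partial \mu_\phi(x)}{\partial x},
\end{equation*}
so a small perturbation $\delta$ moves the encoding by approximately $\|J_{\mu_\phi}(x)\,\delta\|^2 = \delta^{\top} G_\phi(x)\,\delta$, which is maximised when $\delta$ aligns with the top eigenvector of $G_\phi(x)$; this is the directional bias an adversary can exploit. Eigenvalue summaries of $G_\phi(x)$, such as the largest eigenvalue, are the kind of eigenspectrum-based robustness scores referred to above.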