An important property for deep neural networks to possess is the ability to perform robust out-of-distribution (OOD) detection on previously unseen data. This property is essential for safety when deploying models in real-world applications. Recent studies show that probabilistic generative models can perform poorly on this task, which is surprising given that they seek to estimate the likelihood of the training data. To alleviate this issue, we propose an exponentially tilted Gaussian prior distribution for the Variational Autoencoder (VAE). With this prior, we achieve state-of-the-art results using just the negative log-likelihood that the VAE naturally assigns, while being orders of magnitude faster than some competitive methods. We also show that our model produces high-quality image samples that are crisper than those of a standard Gaussian VAE. The new prior distribution has a very simple implementation: it uses a Kullback-Leibler divergence that compares a latent vector's length with the radius of a sphere.
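To make the last sentence concrete, the following is a minimal sketch of what such a KL-style penalty could look like. It assumes (hypothetically, for illustration only; the paper's exact expression may differ) that the tilted prior concentrates its mass on a sphere of radius `tau`, so the mean term of the usual diagonal-Gaussian KL is replaced by a squared difference between the length of the posterior mean and `tau`:

```python
import numpy as np

def tilted_kl(mu, logvar, tau=5.0):
    """Illustrative KL-style penalty for an exponentially tilted
    Gaussian prior. The variance terms follow the standard
    diagonal-Gaussian KL; the mean term compares the length of the
    posterior mean vector with the radius ``tau`` of the sphere on
    which the tilted prior places its mass. This form is an
    assumption for illustration, not the paper's exact formula.
    """
    var = np.exp(logvar)
    # Standard Gaussian KL pieces for the (diagonal) variances.
    var_term = 0.5 * np.sum(var - logvar - 1.0, axis=-1)
    # Mean term: penalize the gap between ||mu|| and the radius tau.
    radius_term = 0.5 * (np.linalg.norm(mu, axis=-1) - tau) ** 2
    return var_term + radius_term
```

With unit posterior variances (`logvar = 0`) the penalty reduces to the radius term alone, so it vanishes whenever the posterior mean already lies on the sphere of radius `tau`.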