Normalizing flows are prominent deep generative models that provide tractable probability distributions and efficient density estimation. However, they are well known to fail at detecting Out-of-Distribution (OOD) inputs, as they directly encode the local features of the input representations in their latent space. In this paper, we solve this overconfidence issue of normalizing flows by demonstrating that flows, when extended with an attention mechanism, can reliably detect outliers, including adversarial attacks. Our approach does not require outlier data for training, and we showcase the effectiveness of our method for OOD detection by reporting state-of-the-art performance in diverse experimental settings. Code is available at https://github.com/ComputationalRadiationPhysics/InFlow.
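For context on the likelihood-based OOD scoring that the abstract refers to, the following is a minimal illustrative sketch, not the InFlow architecture from the paper: a toy affine-coupling flow whose negative log-likelihood is used as an outlier score. The names ToyCoupling, ToyFlow, and ood_score are hypothetical and chosen for this example only.

```python
# Minimal sketch (assumption: not the paper's method) of likelihood-based
# OOD scoring with a toy affine-coupling normalizing flow in PyTorch.
import torch
import torch.nn as nn

class ToyCoupling(nn.Module):
    """One affine coupling layer: transforms half of the features
    conditioned on the other half and tracks the log-determinant."""
    def __init__(self, dim):
        super().__init__()
        half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(half, 64), nn.ReLU(), nn.Linear(64, 2 * half)
        )

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=-1)
        s, t = self.net(x1).chunk(2, dim=-1)
        s = torch.tanh(s)                      # keep scales bounded
        z2 = x2 * torch.exp(s) + t
        log_det = s.sum(dim=-1)                # log|det Jacobian| of this layer
        return torch.cat([x1, z2], dim=-1), log_det

class ToyFlow(nn.Module):
    def __init__(self, dim, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList(ToyCoupling(dim) for _ in range(n_layers))

    def log_prob(self, x):
        z, log_det_total = x, torch.zeros(x.shape[0])
        for layer in self.layers:
            z, log_det = layer(z)
            z = z.flip(-1)                     # cheap permutation between layers
            log_det_total = log_det_total + log_det
        # standard-normal base density plus change-of-variables correction
        log_pz = -0.5 * (z ** 2 + torch.log(torch.tensor(2 * torch.pi))).sum(-1)
        return log_pz + log_det_total

def ood_score(flow, x):
    """Higher score = more likely OOD (negative log-likelihood)."""
    with torch.no_grad():
        return -flow.log_prob(x)

# Usage: compare scores against a threshold chosen on in-distribution data.
flow = ToyFlow(dim=8)
print(ood_score(flow, torch.randn(16, 8)))
```

Thresholding this score is the standard flow-based OOD baseline; the paper's contribution is to show that such scores become reliable once the flow is extended with an attention mechanism.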