The problem of detecting Out-of-Distribution (OoD) inputs is of paramount importance for Deep Neural Networks. It has been previously shown that even Deep Generative Models, which allow estimating the density of the inputs, may not be reliable and often tend to make over-confident predictions for OoDs, assigning them a higher density than the in-distribution data. This over-confidence of a single model can potentially be mitigated with Bayesian inference over the model parameters, which takes epistemic uncertainty into account. This paper investigates three approaches to Bayesian inference: stochastic gradient Markov chain Monte Carlo, Bayes by Backpropagation, and Stochastic Weight Averaging-Gaussian. The inference is implemented over the weights of the deep neural networks that parameterize the likelihood of the Variational Autoencoder. We empirically evaluate the approaches against several benchmarks that are often used for OoD detection: estimation of the marginal likelihood utilizing a sampled model ensemble, the typicality test, the disagreement score, and the Watanabe-Akaike Information Criterion. Finally, we introduce two simple scores that demonstrate state-of-the-art performance.
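A minimal sketch of how the ensemble-based benchmark scores mentioned above could be computed, assuming per-input log-likelihoods log p(x | θ_s) have already been evaluated under S weight samples drawn by one of the inference schemes; the array layout, the function name ood_scores, and the exact form of the disagreement score are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def ood_scores(log_px_samples):
    """Illustrative ensemble-based OoD scores.

    log_px_samples: array of shape (S, N) holding log p(x_n | theta_s)
    for S sampled weight vectors theta_s and N inputs x_n (assumed layout).
    """
    S = log_px_samples.shape[0]

    # Marginal likelihood estimate: log E_theta[p(x | theta)],
    # computed stably with a log-sum-exp over the ensemble axis.
    log_marginal = np.logaddexp.reduce(log_px_samples, axis=0) - np.log(S)

    # WAIC-style score: mean log-likelihood penalized by the variance of
    # the log-likelihoods across sampled weights (epistemic spread).
    waic = log_px_samples.mean(axis=0) - log_px_samples.var(axis=0)

    # Disagreement score: here taken as the standard deviation of the
    # log-likelihoods across the ensemble (an assumed, simple variant).
    disagreement = log_px_samples.std(axis=0)

    return log_marginal, waic, disagreement
```

Higher marginal likelihood and WAIC values would indicate in-distribution inputs, while a large disagreement would flag inputs on which the sampled models conflict.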