There has been significant progress in detecting out-of-distribution (OOD) inputs in neural networks recently, primarily due to the use of large models pretrained on large datasets, and the emerging use of multi-modality. We show a severe adversarial vulnerability of even the strongest current OOD detection techniques. With a small, targeted perturbation to the input pixels, we can easily change an image's assignment from in-distribution to out-of-distribution, and vice versa. In particular, we demonstrate severe adversarial vulnerability on the challenging near OOD CIFAR-100 vs CIFAR-10 task, as well as on the far OOD CIFAR-100 vs SVHN task. We study the adversarial robustness of several post-processing techniques, including the simple baseline of Maximum of Softmax Probabilities (MSP), the Mahalanobis distance, and the newly proposed \textit{Relative} Mahalanobis distance. By comparing the loss of OOD detection performance at various perturbation strengths, we demonstrate the beneficial effect of using ensembles of OOD detectors, and of using the \textit{Relative} Mahalanobis distance over other post-processing methods. In addition, we show that even strong zero-shot OOD detection using CLIP and multi-modality suffers from a severe lack of adversarial robustness as well. Our code is available at https://github.com/stanislavfort/adversaries_to_OOD_detection
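To make the compared post-processing scores concrete, here is a minimal NumPy sketch of the Mahalanobis and \textit{Relative} Mahalanobis OOD scores computed on penultimate-layer features. The function and argument names (\texttt{feats}, \texttt{class\_means}, \texttt{shared\_cov}, \texttt{bg\_mean}, \texttt{bg\_cov}) are illustrative assumptions, not the repository's API; the Relative variant subtracts the distance under a single class-agnostic background Gaussian, following Ren et al. (2021).

\begin{verbatim}
import numpy as np

def mahalanobis_scores(feats, class_means, shared_cov):
    """Per-class squared Mahalanobis distances under a shared covariance.

    feats: (N, D) features, class_means: (K, D), shared_cov: (D, D).
    Returns an (N, K) array of squared distances.
    """
    prec = np.linalg.pinv(shared_cov)
    d = feats[:, None, :] - class_means[None, :, :]        # (N, K, D)
    return np.einsum('nkd,de,nke->nk', d, prec, d)

def relative_mahalanobis_score(feats, class_means, shared_cov,
                               bg_mean, bg_cov):
    """Relative Mahalanobis OOD score: higher = more in-distribution.

    Subtracts the distance to a single background Gaussian (bg_mean,
    bg_cov) fit to all training features, then takes the closest class.
    """
    md_k = mahalanobis_scores(feats, class_means, shared_cov)     # (N, K)
    md_0 = mahalanobis_scores(feats, bg_mean[None, :], bg_cov)[:, 0]
    rmd = md_k - md_0[:, None]
    return -rmd.min(axis=1)
\end{verbatim}

The plain Mahalanobis score is the same computation without the background subtraction, i.e. \texttt{-mahalanobis\_scores(...).min(axis=1)}.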
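The attack itself only needs a differentiable OOD score. Below is a minimal one-step sign-gradient (FGSM-style) PyTorch sketch, not the paper's exact attack procedure; \texttt{score\_fn}, \texttt{msp\_score}, and \texttt{eps} are illustrative assumptions. Stronger multi-step variants follow the same pattern at the various perturbation strengths compared in the paper.

\begin{verbatim}
import torch

def msp_score(model, x):
    # Maximum of Softmax Probabilities baseline: higher = more in-distribution.
    return model(x).softmax(dim=-1).max(dim=-1).values

def fgsm_on_ood_score(model, score_fn, x, eps=2/255, to_ood=True):
    """One-step sign-gradient perturbation of a differentiable OOD score.

    Pushing the score down makes an in-distribution image look OOD;
    with to_ood=False the perturbation pushes an OOD image toward
    in-distribution instead.
    """
    x = x.clone().detach().requires_grad_(True)
    score_fn(model, x).sum().backward()
    direction = -1.0 if to_ood else 1.0
    x_adv = x + direction * eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
\end{verbatim}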