Deep learning has become an integral part of many computer vision systems in recent years, owing to its outstanding achievements in object recognition, facial recognition, and scene understanding. However, deep neural networks (DNNs) are susceptible to being fooled with high confidence by an adversary. In practice, the vulnerability of deep learning systems to carefully perturbed images, known as adversarial examples, poses a dire security threat in physical-world applications. To address this phenomenon, we present what is, to our knowledge, the first image set-based adversarial defence approach. Image set classification has shown exceptional performance for object and face recognition, owing to its intrinsic ability to handle appearance variability. We propose a robust deep Bayesian image set classification framework as a defence against a broad range of adversarial attacks. We extensively evaluate the performance of the proposed technique with several voting strategies. We further analyse the effects of image size and perturbation magnitude, along with the ratio of perturbed images in each image set. We also compare our technique with recent state-of-the-art defence methods and evaluate it on a single-shot recognition task. The empirical results demonstrate superior performance on the CIFAR-10, MNIST, ETH-80, and Tiny ImageNet datasets.
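To illustrate the kind of voting strategy the abstract refers to, the sketch below aggregates per-image class predictions over an image set by simple majority vote. This is a generic, hypothetical example only; the paper's actual deep Bayesian aggregation and its alternative voting schemes are not reproduced here.

```python
from collections import Counter

def majority_vote(per_image_predictions):
    """Return the set-level label: the most frequent per-image prediction.

    `per_image_predictions` is a list of class labels, one per image in
    the set (a hypothetical interface, not the paper's actual API).
    """
    counts = Counter(per_image_predictions)
    # most_common(1) gives a list with the single (label, count) pair
    # that occurs most often across the image set.
    label, _ = counts.most_common(1)[0]
    return label

# Hypothetical per-image predictions for a 5-image set;
# a few adversarially perturbed images are outvoted by the clean majority.
print(majority_vote(["cat", "dog", "cat", "cat", "bird"]))  # cat
```

The intuition is that an adversary who perturbs only a fraction of the images in a set cannot flip the set-level decision, since the unperturbed majority still dominates the vote.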