Deep Neural Networks (DNNs) have been shown to be vulnerable to adversarial (Test-Time Evasion (TTE)) attacks, which alter the DNN's decision by making small changes to the input. We propose an attack detector based on class-conditional Generative Adversarial Networks (GANs). We model the distribution of clean data, conditioned on the predicted class label, with an Auxiliary Classifier GAN (ACGAN). Given a test sample and its predicted class, three detection statistics are calculated using the ACGAN Generator and Discriminator. Experiments on image-classification datasets under different TTE attack methods show that our method outperforms state-of-the-art detection methods. We also investigate the effectiveness of anomaly detection at different DNN layers (on input features or on internal-layer features) and demonstrate that anomalies are harder to detect using features closer to the DNN's output layer.
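To make the detection pipeline concrete, the following is a minimal PyTorch sketch of how such statistics could be computed. The abstract does not specify which three statistics are used, so the three shown here (the Discriminator's real/fake score, the Discriminator's class posterior for the predicted label, and a class-conditional reconstruction error obtained by optimizing the Generator's latent code) are illustrative assumptions, as are all module and parameter names (Generator, Discriminator, LATENT_DIM, and so on).

```python
# Hypothetical sketch of ACGAN-based TTE-attack detection statistics.
# The three statistics below are assumptions for illustration; the
# abstract only states that three statistics are computed from the
# ACGAN Generator and Discriminator, not which ones.
import torch
import torch.nn as nn
import torch.nn.functional as F

LATENT_DIM, NUM_CLASSES, IMG_DIM = 100, 10, 28 * 28  # assumed sizes

class Generator(nn.Module):
    """Maps (latent code, class label) to a class-conditional sample."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(NUM_CLASSES, LATENT_DIM)
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 256), nn.ReLU(),
            nn.Linear(256, IMG_DIM), nn.Tanh())
    def forward(self, z, y):
        # Standard ACGAN conditioning trick: modulate noise by a label embedding.
        return self.net(z * self.embed(y))

class Discriminator(nn.Module):
    """ACGAN discriminator: a real/fake head plus an auxiliary class head."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2))
        self.adv_head = nn.Linear(256, 1)            # real vs. fake logit
        self.cls_head = nn.Linear(256, NUM_CLASSES)  # auxiliary classifier logits
    def forward(self, x):
        h = self.body(x)
        return self.adv_head(h), self.cls_head(h)

def detection_statistics(x, y_pred, G, D, steps=200, lr=0.05):
    """Three illustrative statistics for a flattened test image x (shape
    [1, IMG_DIM]) with predicted class y_pred (int)."""
    adv_logit, cls_logits = D(x)
    s1 = torch.sigmoid(adv_logit)                   # 1) D's "realness" score
    s2 = F.softmax(cls_logits, dim=-1)[0, y_pred]   # 2) D's posterior for y_pred
    # 3) Class-conditional reconstruction error: search for a latent z such
    #    that G(z, y_pred) approximates x. Attacked inputs tend to reconstruct
    #    poorly under their (incorrectly) predicted class.
    for p in G.parameters():
        p.requires_grad_(False)                     # optimize z only, not G
    z = torch.zeros(1, LATENT_DIM, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    y = torch.tensor([y_pred])
    for _ in range(steps):
        opt.zero_grad()
        loss = F.mse_loss(G(z, y), x)
        loss.backward()
        opt.step()
    s3 = F.mse_loss(G(z, y), x).detach()
    return s1.item(), s2.item(), s3.item()
```

In practice, a detector would calibrate these statistics on clean held-out data, for example by thresholding each one or by combining them into a single detection score; the abstract does not specify the combination rule.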