Deep Neural Networks (DNNs) are known to be vulnerable to adversarial samples, and detecting such samples is crucial for the wide deployment of DNN models. Recently, a number of deep testing methods from software engineering have been proposed to find vulnerabilities in DNN systems; one of them, Model Mutation Testing (MMT), has been used to successfully detect various adversarial samples generated by different kinds of adversarial attacks. However, the mutated models in MMT are large in number (e.g., over 100 models) and lack diversity (e.g., they can be easily circumvented by high-confidence adversarial samples), which makes MMT less efficient in real applications and less effective at detecting high-confidence adversarial samples. In this study, we propose Graph-Guided Testing (GGT) for adversarial sample detection to overcome these challenges. GGT generates pruned models under the guidance of graph characteristics; each pruned model has only about 5% of the parameters of a mutated model in MMT, and the graph-guided models exhibit higher diversity. Experiments on CIFAR10 and SVHN validate that GGT performs much better than MMT in terms of both effectiveness and efficiency.
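To make the detection idea concrete, the sketch below builds an ensemble of heavily pruned copies of a classifier (about 5% of the weights kept) and flags an input when many copies disagree with the original model's prediction. This is only an illustration under stated assumptions: random unstructured pruning stands in for GGT's graph-guided pruning criterion, and the model, sparsity, and decision threshold are hypothetical choices, not the paper's configuration.

```python
# Minimal sketch: detection via an ensemble of heavily pruned models.
# NOTE: random unstructured pruning is a stand-in for GGT's graph-guided
# criterion; model, sparsity, and threshold values are illustrative only.
import copy
import torch
import torch.nn.utils.prune as prune
import torchvision.models as models

def make_pruned_copies(model, n_copies=20, sparsity=0.95):
    """Return n_copies of `model`, each keeping ~(1 - sparsity) of its weights."""
    copies = []
    for _ in range(n_copies):
        m = copy.deepcopy(model)
        for module in m.modules():
            if isinstance(module, (torch.nn.Conv2d, torch.nn.Linear)):
                prune.random_unstructured(module, name="weight", amount=sparsity)
        m.eval()
        copies.append(m)
    return copies

def label_change_rate(original, copies, x):
    """Fraction of pruned copies whose prediction differs from the original model's."""
    with torch.no_grad():
        base = original(x).argmax(dim=1)
        changed = sum((m(x).argmax(dim=1) != base).float().mean().item() for m in copies)
    return changed / len(copies)

# Usage: flag inputs whose label-change rate exceeds a chosen threshold.
model = models.resnet18(num_classes=10).eval()
copies = make_pruned_copies(model, n_copies=20, sparsity=0.95)
x = torch.randn(1, 3, 32, 32)                              # a CIFAR10-sized input
is_suspicious = label_change_rate(model, copies, x) > 0.3  # threshold is illustrative
```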