Adversarial attacks on neural network classifiers (NNC) and the use of random noise in these methods have stimulated a large number of works in recent years. However, despite all previous investigations, existing approaches that rely on random noise to fool NNC have fallen far short of the performance of state-of-the-art adversarial methods. In this paper, we fill this gap by introducing stochastic sparse adversarial attacks (SSAA): simple, fast, and purely noise-based targeted and untargeted attacks on NNC. SSAA offer new examples of sparse (or $L_0$) attacks, for which only a few methods have been proposed previously. These attacks are devised by exploiting a small-time expansion idea widely used for Markov processes. Experiments on small and large datasets (CIFAR-10 and ImageNet) illustrate several advantages of SSAA over state-of-the-art methods. For instance, in the untargeted case, our method, called voting folded Gaussian attack (VFGA), scales efficiently to ImageNet and achieves a significantly lower $L_0$ score than SparseFool (up to $\frac{1}{14}$ lower) while being faster. In the targeted setting, VFGA achieves appealing results on ImageNet and is significantly faster than the Carlini-Wagner $L_0$ attack.