This paper introduces stochastic sparse adversarial attacks (SSAA), which are simple, fast, and purely noise-based targeted and untargeted attacks on neural network classifiers (NNCs). SSAA offer new examples of sparse (or $L_0$) attacks, for which only a few methods have been proposed previously. These attacks are devised by exploiting a small-time expansion idea widely used for Markov processes. Experiments on small and large datasets (CIFAR-10 and ImageNet) illustrate several advantages of SSAA in comparison with state-of-the-art methods. For instance, in the untargeted case, our method, called Voting Folded Gaussian Attack (VFGA), scales efficiently to ImageNet and achieves a significantly lower $L_0$ score than SparseFool (up to $\frac{2}{5}$) while being faster. Moreover, VFGA achieves better $L_0$ scores on ImageNet than Sparse-RS when both attacks are fully successful on a large number of samples.
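To make the notion of a purely noise-based sparse ($L_0$) attack concrete, the sketch below perturbs one randomly chosen coordinate per step with folded-Gaussian noise and keeps a change only when the classifier's confidence in the true label drops. The `predict_proba` black-box interface, the parameter names, and the greedy acceptance rule are assumptions made for this illustration; it is a minimal sketch of the general idea, not the paper's VFGA algorithm.

```python
import numpy as np


def sparse_noise_attack(x, label, predict_proba, sigma=0.1, max_iters=1000, rng=None):
    """Illustrative untargeted, noise-based sparse attack (not the authors' VFGA).

    x             : flat numpy array, the clean input (values assumed in [0, 1]).
    label         : integer index of the true class.
    predict_proba : hypothetical callable mapping an input array to class probabilities.
    sigma         : scale of the folded-Gaussian noise added to one coordinate per step.
    """
    rng = rng or np.random.default_rng()
    x_adv = x.copy()
    best_conf = predict_proba(x_adv)[label]
    for _ in range(max_iters):
        i = rng.integers(x.size)                      # perturb a single coordinate (sparsity)
        candidate = x_adv.copy()
        # Folded-Gaussian noise: absolute value of a Gaussian sample.
        candidate[i] = np.clip(candidate[i] + np.abs(rng.normal(0.0, sigma)), 0.0, 1.0)
        probs = predict_proba(candidate)
        if probs.argmax() != label:                   # success: predicted label flipped
            return candidate
        if probs[label] < best_conf:                  # keep changes that reduce true-class confidence
            x_adv, best_conf = candidate, probs[label]
    return x_adv                                      # best effort if no misclassification found
```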