The susceptibility of deep learning models to adversarial perturbations has renewed interest in adversarial examples and given rise to a number of attacks. However, most of these attacks fail to cover the large spectrum of adversarial perturbations that remain imperceptible to humans. In this paper, we present localized uncertainty attacks, a novel class of threat models against both deterministic and stochastic classifiers. Under this threat model, we craft adversarial examples by perturbing only the regions of an input where the classifier is uncertain. To find such regions, we use the classifier's predictive uncertainty when it is stochastic, or we learn a surrogate model to amortize the uncertainty when it is deterministic. Unlike $\ell_p$-ball or functional attacks, which perturb inputs indiscriminately, our targeted changes can be less perceptible. Evaluated under our threat model, these attacks still produce strong adversarial examples, and the examples retain a greater degree of similarity to the original inputs.
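To make the idea concrete, below is a minimal sketch of how an attack of this kind could be assembled for a stochastic classifier: a per-pixel uncertainty map, approximated here with Monte Carlo dropout and the gradient of the predictive entropy, is thresholded into a mask, and a PGD-style attack is restricted to the masked region. The function names (`uncertainty_mask`, `masked_pgd`) and all hyperparameters are illustrative assumptions, not the paper's exact procedure.

```python
# Sketch of a localized uncertainty attack (assumptions: a PyTorch classifier
# with dropout kept active at test time for stochastic predictions; mask
# construction and hyperparameters are illustrative, not the paper's method).
import torch
import torch.nn.functional as F

def uncertainty_mask(model, x, n_samples=20, quantile=0.7):
    """Binary per-pixel mask marking the most uncertain input regions.

    Uncertainty is approximated by the input gradient of the predictive
    entropy computed from Monte Carlo dropout samples.
    """
    model.train()  # keep dropout active for MC sampling
    x = x.clone().requires_grad_(True)
    probs = torch.stack([F.softmax(model(x), dim=-1) for _ in range(n_samples)])
    mean_probs = probs.mean(0)
    entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(-1).sum()
    grad, = torch.autograd.grad(entropy, x)
    score = grad.abs().amax(dim=1, keepdim=True)  # per-pixel saliency (B,1,H,W)
    thresh = torch.quantile(score.flatten(1), quantile, dim=1).view(-1, 1, 1, 1)
    return (score >= thresh).float()  # 1 = uncertain region, 0 = untouched

def masked_pgd(model, x, y, mask, eps=8 / 255, alpha=2 / 255, steps=10):
    """PGD restricted to the uncertain regions selected by `mask`."""
    model.eval()
    x_adv = x.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign() * mask  # perturb mask only
        x_adv = x + (x_adv - x).clamp(-eps, eps)  # project back into eps-ball
        x_adv = x_adv.clamp(0, 1)
    return x_adv
```

In this sketch the perturbation budget is still an $\ell_\infty$ bound, but the update is zeroed outside the uncertain region, so the rest of the input is left untouched; for a deterministic classifier, the abstract's surrogate model would replace the MC-dropout estimate used here.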