Deep Neural Networks have recently been found to be vulnerable. Well-designed inputs, called adversarial examples, can lead the networks to make incorrect predictions. Depending on the scenario, goal, and capability, attacks vary in difficulty: a targeted attack is harder than a non-targeted attack, a universal attack is harder than a non-universal attack, and a transferable attack is harder than a non-transferable one. The question is: does there exist an attack that meets all of these requirements? In this paper, we answer this question by producing attacks under all of these conditions. We learn a universal mapping from source images to adversarial examples. These examples can fool classification networks into classifying all of them into one targeted class, and they also have strong transferability. Our code is released at: xxxxx.
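To make the idea of a "universal mapping" concrete, the following is a minimal, hypothetical sketch: a generator network G is trained so that its outputs, added to source images, become adversarial examples that a frozen classifier f assigns to one chosen target class. The architecture, perturbation bound epsilon, and names (G, f, target_class) are illustrative assumptions, not the paper's exact method.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a source image to a bounded adversarial perturbation
    (the 'universal mapping' in spirit; layers are placeholders)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),  # output in [-1, 1]
        )

    def forward(self, x):
        return self.net(x)

def train_step(G, f, x, target_class, optimizer, epsilon=8 / 255):
    """One optimization step: push the frozen classifier f's prediction
    on every adversarial example toward the single target class."""
    optimizer.zero_grad()
    # Bounded perturbation keeps the adversarial example close to the source.
    x_adv = torch.clamp(x + epsilon * G(x), 0.0, 1.0)
    target = torch.full((x.size(0),), target_class,
                        dtype=torch.long, device=x.device)
    loss = nn.functional.cross_entropy(f(x_adv), target)
    loss.backward()  # gradients flow only into G; f's weights are not updated
    optimizer.step()
    return loss.item()
```

Once trained, G is applied in a single forward pass per image, which is what makes the attack universal across inputs rather than requiring per-image optimization.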