In recent years, neural networks have been extensively deployed for computer vision tasks, particularly visual classification, where new algorithms have been reported to match or even surpass human performance. Recent studies have shown, however, that these networks are vulnerable to adversarial examples: small, often imperceptible perturbations of the input image suffice to fool even the most powerful models. \emph{Advbox} is a toolbox for generating adversarial examples that fool neural networks built with PaddlePaddle, PyTorch, Caffe2, MxNet, Keras, or TensorFlow, and it can benchmark the robustness of machine learning models. Compared to previous work, our platform supports black-box attacks on Machine-Learning-as-a-Service, as well as additional attack scenarios such as Face Recognition Attack, Stealth T-shirt, and Deepfake Face Detect. The code is licensed under the Apache 2.0 license and is openly available at https://github.com/advboxes/AdvBox.
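To make the core idea concrete, the sketch below shows how a gradient-based perturbation of this kind can be generated with the fast gradient sign method (FGSM) in PyTorch. This is a generic illustration of the technique, not AdvBox's own API; the model, input tensor, and the \texttt{epsilon} budget are placeholder assumptions.

\begin{verbatim}
# Minimal FGSM sketch in PyTorch (illustrative only, not AdvBox's API).
# `model`, `image`, `label`, and `epsilon` are placeholder assumptions.
import torch
import torch.nn.functional as F

def fgsm_example(model, image, label, epsilon=0.03):
    """Perturb `image` by epsilon * sign(grad) to induce misclassification."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that increases the loss, then clip to a
    # valid pixel range so the perturbation stays imperceptibly small.
    adv = image + epsilon * image.grad.sign()
    return adv.clamp(0.0, 1.0).detach()
\end{verbatim}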