Deep Neural Networks (DNNs) are everywhere, frequently performing fairly complex tasks that were once unimaginable for machines. In doing so, they make many decisions that, depending on the application, may have disastrous consequences if wrong. This necessitates a formal argument that the underlying neural networks satisfy certain desirable properties. Robustness is one such key property for DNNs, particularly when they are deployed in safety- or business-critical applications. Informally speaking, a DNN is not robust if very small changes to its input can affect the output in a considerable way (e.g., change the classification of that input). The task of finding an adversarial example is to demonstrate this lack of robustness, whenever one exists. While this can be done with the help of constrained optimization techniques, scalability becomes a challenge for large networks. This paper proposes the use of information gathered by preprocessing the DNN to heavily simplify the optimization problem. Our experiments substantiate that this approach is effective and performs significantly better than the state of the art.
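To make the informal robustness notion concrete, the following is a minimal sketch of the standard formalization from the verification literature; the abstract does not fix a norm or objective, so the $\ell_\infty$ ball and the distance-minimizing formulation below are illustrative assumptions rather than the paper's exact setup. A classifier $f : \mathbb{R}^n \to \mathbb{R}^m$ is locally robust at an input $x$ with radius $\epsilon$ if

\[
\forall x'.\; \lVert x' - x \rVert_\infty \le \epsilon \;\Longrightarrow\; \arg\max_i f_i(x') = \arg\max_i f_i(x),
\]

and finding an adversarial example can then be cast as the constrained optimization problem

\[
\min_{x'} \; \lVert x' - x \rVert \quad \text{s.t.} \quad \arg\max_i f_i(x') \ne \arg\max_i f_i(x),
\]

where a witness $x'$ with $\lVert x' - x \rVert \le \epsilon$ refutes robustness at $x$. The number of variables and constraints grows with the network's size, which is the scalability bottleneck the preprocessing step described above is meant to alleviate.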