Adversarial training is a widely used strategy for making neural networks resistant to adversarial perturbations. For a neural network of width $m$ and $n$ training inputs in $d$ dimensions, the forward and backward computation costs $\Omega(mnd)$ time per training iteration. In this paper we analyze the convergence guarantee of the adversarial training procedure on a two-layer neural network with shifted ReLU activation, and show that only $o(m)$ neurons are activated for each input per iteration. Furthermore, we develop an algorithm for adversarial training with $o(mnd)$ time cost per iteration by applying a half-space reporting data structure.
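To make the activation-sparsity claim concrete, here is a minimal sketch (not the paper's implementation) of a two-layer network $f(x) = \frac{1}{\sqrt{m}} \sum_{r=1}^{m} a_r \max(\langle w_r, x \rangle - b, 0)$ with shifted ReLU activation. The threshold choice $b = \sqrt{0.4 \log m}$ and the brute-force `active_set` scan are assumptions standing in for the paper's parameter setting and its half-space reporting (HSR) data structure.

```python
import numpy as np

# Sketch of the sparse-activation idea, under assumed parameter choices.
# Two-layer network: f(x) = (1/sqrt(m)) * sum_r a_r * max(<w_r, x> - b, 0),
# i.e. a ReLU shifted by a threshold b. With b = Theta(sqrt(log m)) and
# Gaussian weights, only o(m) neurons fire on a unit-norm input, so the
# forward/backward pass only needs to touch the active set.

m, d = 4096, 64                      # width and input dimension
rng = np.random.default_rng(0)
W = rng.standard_normal((m, d))      # hidden weights w_r ~ N(0, I_d)
a = rng.choice([-1.0, 1.0], size=m)  # fixed random output weights a_r
b = np.sqrt(0.4 * np.log(m))         # assumed shift; gives ~m^{4/5} = o(m) active neurons

def active_set(x):
    """Indices r with <w_r, x> > b.

    Brute-force stand-in for the HSR data structure: the query
    'which w_r lie in the half-space {w : <w, x> > b}' is exactly
    what an HSR structure answers in sublinear time.
    """
    return np.nonzero(W @ x > b)[0]

def forward(x):
    """Evaluate f(x) while touching only the active neurons."""
    S = active_set(x)
    pre = W[S] @ x - b               # shifted pre-activations, positive on S
    return (a[S] @ pre) / np.sqrt(m), S

x = rng.standard_normal(d)
x /= np.linalg.norm(x)               # unit-norm input, as in the analysis
y, S = forward(x)
print(f"active neurons: {len(S)} / {m}")  # far fewer than m fire
```

Running this, only a small fraction of the $m$ neurons fire on each input, which is the $o(m)$ activation behavior claimed above; replacing the brute-force scan with an HSR query is what brings the per-iteration cost below $mnd$.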