Daniely and Schacham recently showed that gradient descent finds adversarial examples on random undercomplete two-layer ReLU neural networks. The term "undercomplete" refers to the fact that their proof only holds when the number of neurons is a vanishing fraction of the ambient dimension. We extend their result to the overcomplete case, where the number of neurons is larger than the dimension (yet also subexponential in the dimension). In fact, we prove that a single step of gradient descent suffices. We also prove this result for any random neural network of subexponential width with a smooth activation function.
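To make the claim concrete, here is a minimal numerical sketch (not the authors' construction or proof): a random two-layer ReLU network in the overcomplete regime, attacked with a single gradient step on the input, where the step size is chosen from the local linearization of the network. The dimensions, the weight scaling, and the step-size rule are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: ambient dimension d, width k with k > d (overcomplete).
d, k = 1000, 4000
W = rng.normal(size=(k, d)) / np.sqrt(d)           # hidden-layer weights
a = rng.choice([-1.0, 1.0], size=k) / np.sqrt(k)   # output weights

def f(x):
    """Two-layer ReLU network: f(x) = a . ReLU(W x)."""
    return a @ np.maximum(W @ x, 0.0)

def grad_f(x):
    """Gradient of f with respect to the input: W^T (a * 1{Wx > 0})."""
    return W.T @ (a * (W @ x > 0))

# Random input, normalized to the sphere of radius sqrt(d).
x = rng.normal(size=d)
x *= np.sqrt(d) / np.linalg.norm(x)

# One gradient step on the input. The linearization predicts
# f(x - t*g) ~ f(x) - t*||g||^2 (moving against sign(f(x))), so a step of
# roughly 2|f(x)|/||g||^2 crosses zero; the factor 3 is a safety margin.
g = grad_f(x)
eta = 3.0 * abs(f(x)) / np.linalg.norm(g) ** 2
x_adv = x - np.sign(f(x)) * eta * g

print(f"f(x)     = {f(x):+.4f}")
print(f"f(x_adv) = {f(x_adv):+.4f}  (sign flipped)")
print(f"relative perturbation ||x_adv - x|| / ||x|| = "
      f"{np.linalg.norm(x_adv - x) / np.linalg.norm(x):.4f}")
```

In this sketch the sign of the output flips while the input moves by only a small fraction of its norm, which is the qualitative behavior the abstract describes; the paper's contribution is proving that this single step provably succeeds in the stated width regimes.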