We consider the phenomenon of adversarial examples in ReLU networks with independent Gaussian parameters. For networks of constant depth and with a large range of widths (for instance, it suffices if the width of each layer is polynomial in that of any other layer), small perturbations of input vectors lead to large changes in outputs. This generalizes results of Daniely and Schacham (2020) for networks of rapidly decreasing width and of Bubeck et al. (2021) for two-layer networks. The proof shows that adversarial examples arise in these networks because the functions they compute are very close to linear. Bottleneck layers play a key role: the minimal width up to some point in the network determines the scale and sensitivity of the mapping computed up to that point. The main result is for networks of constant depth, but we also show that some constraint on depth is necessary for a result of this kind, because there are suitably deep networks that, with constant probability, compute a function that is close to constant.
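The mechanism described above can be illustrated numerically. The following is a minimal sketch, not taken from the paper: it builds a random ReLU network with i.i.d. Gaussian weights (layer widths and the 1/sqrt(fan-in) scaling are illustrative assumptions), and compares the effect of a small step along the gradient direction with a random step of the same size. Because the computed function is close to linear, the gradient step typically flips the sign of the output while the random step barely moves it.

```python
# Minimal sketch (illustrative, not the paper's construction): a random
# Gaussian ReLU network is close to linear near a typical input, so a small
# step along the gradient direction is an adversarial perturbation.
import numpy as np

rng = np.random.default_rng(0)

def random_relu_net(widths):
    """i.i.d. Gaussian weights, scaled by 1/sqrt(fan-in) to keep activations O(1)."""
    return [rng.normal(0.0, 1.0 / np.sqrt(n_in), size=(n_out, n_in))
            for n_in, n_out in zip(widths[:-1], widths[1:])]

def forward(weights, x):
    for W in weights[:-1]:
        x = np.maximum(W @ x, 0.0)        # ReLU hidden layers
    return (weights[-1] @ x).item()       # scalar output

def grad(weights, x):
    """Gradient of the piecewise-linear network at x (vector-Jacobian products)."""
    masks, h = [], x
    for W in weights[:-1]:
        h = np.maximum(W @ h, 0.0)
        masks.append(h > 0)               # ReLU activation pattern at x
    g = weights[-1].ravel()
    for W, m in zip(reversed(weights[:-1]), reversed(masks)):
        g = (g * m) @ W
    return g

d = 1000                                  # input dimension (hypothetical)
weights = random_relu_net([d, 500, 500, 1])
x = rng.normal(size=d) / np.sqrt(d)       # typical input with ||x|| ~ 1
f0 = forward(weights, x)
g = grad(weights, x)

eps = 0.1                                 # perturbation size, small next to ||x|| ~ 1
x_adv = x - np.sign(f0) * eps * g / np.linalg.norm(g)   # gradient direction
u = rng.normal(size=d)
x_rnd = x + eps * u / np.linalg.norm(u)                 # random direction

print(f"f(x)                  = {f0:+.4f}")
print(f"f(x + gradient step)  = {forward(weights, x_adv):+.4f}")  # sign typically flips
print(f"f(x + random step)    = {forward(weights, x_rnd):+.4f}")  # barely moves
```

The contrast reflects the near-linearity argument: if f(x) is approximately g . x, then a random unit direction u has |g . u| on the order of ||g||/sqrt(d), while the gradient direction realizes the full ||g||, so perturbations of relative size roughly 1/sqrt(d) already suffice to change the sign of the output.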