This work studies the (non)robustness of two-layer neural networks in various high-dimensional linearized regimes. We establish fundamental trade-offs between memorization and robustness, as measured by the Sobolev-seminorm of the model w.r.t. the data distribution, i.e., the square root of the average squared $L_2$-norm of the gradient of the model w.r.t. its input. More precisely, if $n$ is the number of training examples, $d$ is the input dimension, and $k$ is the number of hidden neurons in a two-layer neural network, we prove for a large class of activation functions that, if the model memorizes even a fraction of the training data, then its Sobolev-seminorm is lower-bounded by (i) $\sqrt{n}$ in the case of infinite-width random features (RF) or neural tangent kernel (NTK) with $d \gtrsim n$; (ii) $\sqrt{n}$ in the case of finite-width RF with proportionate scaling of $d$ and $k$; and (iii) $\sqrt{n/k}$ in the case of finite-width NTK with proportionate scaling of $d$ and $k$. Moreover, all of these lower bounds are tight: they are attained by the min-norm / least-squares interpolator (when $n$, $d$, and $k$ are in the appropriate interpolating regime). All our results hold as soon as the data is log-concave isotropic and there is label noise, i.e., the target variable is not a deterministic function of the data / features. We empirically validate our theoretical results with experiments. Incidentally, these experiments also reveal, for the first time, (iv) a multiple-descent phenomenon in the robustness of the min-norm interpolator.
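For concreteness, a minimal sketch of the robustness measure described above (the symbols $f$ for the model and $P$ for the data distribution are notational assumptions introduced here for illustration, not notation taken from the paper):
\[
  \|f\|_{\mathrm{Sob}(P)} \;:=\; \sqrt{\,\mathbb{E}_{x \sim P}\,\big\|\nabla_x f(x)\big\|_2^2\,}.
\]
Under this reading, a lower bound of order $\sqrt{n}$ on this quantity says that the input-gradient of a memorizing model must, on average, grow with the number of training examples it fits.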