A key challenge facing deep learning is that neural networks are often not robust to shifts in the underlying data distribution. We study this problem from the perspective of the statistical concept of parameter identification. Generalization bounds from learning theory often assume that the test distribution is close to the training distribution. In contrast, if we can identify the "true" parameters, then the model generalizes to arbitrary distribution shifts. However, neural networks typically have internal symmetries that make parameter identification impossible. We show that we can identify the function represented by a quadratic network even though we cannot identify its parameters; we extend this result to neural networks with ReLU activations. Thus, we can obtain robust generalization bounds for neural networks. We leverage this result to obtain new bounds for contextual bandits and transfer learning with quadratic neural networks. Overall, our results suggest that we can improve the robustness of neural networks by designing models that can represent the true data-generating process.
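To make the symmetry obstruction concrete, consider a minimal worked example in our own notation (illustrative; the paper's exact construction may differ): a two-layer quadratic network $f_W(x) = \|Wx\|^2$. For any orthogonal matrix $Q$,
$$ f_{QW}(x) = x^\top W^\top Q^\top Q W x = x^\top W^\top W x = f_W(x), $$
so the weights $W$ cannot be recovered from the function, yet the matrix $M = W^\top W$, and hence the function $x \mapsto x^\top M x$ itself, can be identified.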