In many research fields in artificial intelligence, deep neural networks have been shown to be useful for estimating unknown functions on high-dimensional input spaces. However, their generalization performance is not yet completely clarified from a theoretical point of view, because they are nonidentifiable and singular learning machines. Moreover, the ReLU function is not differentiable, so the algebraic and analytic methods of singular learning theory cannot be applied directly. In this paper, we study a deep ReLU neural network in overparametrized cases and prove that the Bayesian free energy, which is equal to the minus log marginal likelihood or the Bayesian stochastic complexity, is bounded even if the number of layers is larger than necessary to estimate an unknown data-generating function. Since the Bayesian generalization error is equal to the increase of the free energy as a function of the sample size, our result also shows that the Bayesian generalization error does not increase even if a deep ReLU neural network is designed to be sufficiently large or is in an overparametrized state.
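For reference, the quantities mentioned above can be written in the standard notation of singular learning theory; the symbols $q$, $p$, $\varphi$, $Z_n$, and $G_n$ below are a conventional choice of notation, not taken from the statement itself.
\[
Z_n = \int \prod_{i=1}^{n} p(Y_i \mid X_i, w)\, \varphi(w)\, dw,
\qquad
F_n = -\log Z_n,
\]
\[
\mathbb{E}[G_n] = \mathbb{E}[F_{n+1}] - \mathbb{E}[F_n] - S,
\qquad
S = -\mathbb{E}_{(X,Y)}\bigl[\log q(Y \mid X)\bigr],
\]
where $q(y \mid x)$ is the data-generating distribution, $p(y \mid x, w)$ the model with parameter $w$, $\varphi(w)$ the prior, $F_n$ the Bayesian free energy for a sample of size $n$, $S$ the entropy of the true conditional distribution, and $G_n$ the Kullback-Leibler divergence from $q$ to the Bayesian predictive distribution. In this sense, a bound on $F_n$ that grows only slowly in $n$ controls the expected generalization error.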