In many research fields in artificial intelligence, deep neural networks have been shown to be useful for estimating unknown functions on high-dimensional input spaces. However, their generalization performance has not yet been completely clarified from the theoretical point of view, because they are nonidentifiable and singular learning machines. Moreover, the ReLU function is not differentiable, so the algebraic and analytic methods of singular learning theory cannot be applied to it directly. In this paper, we study a deep ReLU neural network in overparametrized cases and prove that the Bayesian free energy, which is equal to the minus log marginal likelihood or the Bayesian stochastic complexity, is bounded even if the number of layers is larger than necessary to estimate an unknown data-generating function. Since the Bayesian generalization error is equal to the increase of the free energy as a function of the sample size, our result also shows that the Bayesian generalization error does not increase even if a deep ReLU neural network is designed to be sufficiently large, i.e., is in an overparametrized state.
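The relation between the free energy and the generalization error invoked above can be stated compactly. The following is a brief sketch in the standard notation of Bayesian singular learning theory; the symbols $Z_n$, $F_n$, $G_n$, $q$, $p$, $\varphi$, and $S$ are our own choices and are not fixed in the abstract.

\[
  F_n \;=\; -\log Z_n
  \;=\; -\log \int \prod_{i=1}^{n} p(X_i \mid w)\,\varphi(w)\,dw ,
\]
where $Z_n$ is the marginal likelihood of the sample $X_1,\dots,X_n$ under the model $p(x\mid w)$ with prior $\varphi(w)$. The Bayesian generalization error is the Kullback--Leibler divergence from the true density $q$ to the predictive density $p^{*}$,
\[
  G_n \;=\; \int q(x)\,\log \frac{q(x)}{p^{*}(x \mid X_1,\dots,X_n)}\,dx ,
\]
and its expectation satisfies
\[
  \mathbb{E}[G_n] \;=\; \mathbb{E}[F_{n+1}] - \mathbb{E}[F_n] - S ,
  \qquad S = -\int q(x)\log q(x)\,dx ,
\]
where $S$ is the entropy of the true distribution. Hence, if the free energy grows no faster than $nS$ plus a bounded term, the expected generalization error does not increase with the size of the network.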