In many research fields in artificial intelligence, deep neural networks have been shown to be useful for estimating unknown functions on high-dimensional input spaces. However, their generalization performance has not yet been completely clarified from a theoretical point of view, because they are nonidentifiable and singular learning machines. Moreover, the ReLU function is not differentiable, so the algebraic and analytic methods of singular learning theory cannot be applied to it. In this paper, we study a deep ReLU neural network in overparametrized cases and prove that the Bayesian free energy, which is equal to the minus log marginal likelihood or the Bayesian stochastic complexity, is bounded even if the number of layers is larger than necessary to estimate an unknown data-generating function. Since the Bayesian generalization error is equal to the increase of the free energy as a function of the sample size, our result also shows that the Bayesian generalization error does not increase even if a deep ReLU neural network is designed to be sufficiently large, that is, in an overparametrized state.
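For readers unfamiliar with the relation invoked in the last sentence, the following display sketches the standard identity from singular learning theory; the symbols $F_n$, $G_n$, $q$, $\varphi$, and $S$ are not defined in the abstract and are introduced here only for illustration.
\[
  F_n = -\log \int \prod_{i=1}^{n} p(X_i \mid w)\,\varphi(w)\,dw,
  \qquad
  G_n = \int q(x)\,\log\frac{q(x)}{p(x \mid X^n)}\,dx,
\]
\[
  \mathbb{E}[G_n] = \mathbb{E}[F_{n+1}] - \mathbb{E}[F_n] - S,
  \qquad
  S = -\int q(x)\,\log q(x)\,dx,
\]
where $q$ is the data-generating density, $\varphi$ the prior, $p(x \mid X^n)$ the Bayesian predictive density, and $S$ the entropy of $q$. Under this relation, a bound on the free energy that does not deteriorate with the network size also controls the expected generalization error.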