One of the key issues in the analysis of machine learning models is to identify the appropriate function space and norm for the model: that is, a set of functions endowed with a quantity that controls the approximation and estimation errors of the model. In this paper, we address this issue for two representative neural network models: two-layer networks and residual neural networks. We define the Barron space and show that it is the right space for two-layer neural network models, in the sense that optimal direct and inverse approximation theorems hold for functions in the Barron space. For residual neural network models, we construct the so-called flow-induced function space and prove direct and inverse approximation theorems for this space. In addition, we show that the Rademacher complexity of sets bounded under these norms admits optimal upper bounds.
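To make the central object concrete, the following sketch records the integral-representation form of the Barron norm as it is commonly stated for ReLU two-layer networks; the exact formulation and its variants are those of the paper's body, which this abstract summarizes.

```latex
% A two-layer (one-hidden-layer) ReLU network of width m:
%   f_m(x) = \frac{1}{m} \sum_{k=1}^{m} a_k \, \sigma(b_k^\top x + c_k),
%   \qquad \sigma(t) = \max(0, t).
%
% Functions in the Barron space admit an integral representation
% over the parameter space \Omega = \mathbb{R} \times \mathbb{R}^d \times \mathbb{R}:
%
%   f(x) = \int_{\Omega} a \, \sigma(b^\top x + c) \, \rho(\mathrm{d}a, \mathrm{d}b, \mathrm{d}c),
%
% and the Barron norm is the infimum over all representing probability measures \rho:
%
%   \|f\|_{\mathcal{B}} = \inf_{\rho} \, \mathbb{E}_{\rho}\!\left[ \, |a| \left( \|b\|_1 + |c| \right) \right].
```

The direct and inverse approximation theorems mentioned above are then stated in terms of this norm: functions with finite Barron norm are approximable by width-$m$ networks at a dimension-independent rate, and conversely, functions well approximated by such networks must have finite Barron norm.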