This paper studies the approximation capacity of ReLU neural networks with norm constraints on the weights. We prove upper and lower bounds on the approximation error of these networks for smooth function classes. The lower bound is derived through the Rademacher complexity of neural networks, which may be of independent interest. We apply these approximation bounds to analyze the convergence of regression using norm-constrained neural networks and of distribution estimation by GANs. In particular, we obtain convergence rates for over-parameterized neural networks. We also show that GANs can achieve the optimal rate of learning probability distributions when the discriminator is a properly chosen norm-constrained neural network.
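For concreteness, one common way to formalize such a norm constraint (this particular definition is an illustrative assumption on our part, and may differ in detail from the class studied in the paper) is to bound the product of the operator norms of the weight matrices:
\[
\mathcal{F}_K = \Big\{ f(x) = A_L\,\sigma\big(A_{L-1}\cdots \sigma(A_1 x)\big) \ : \ \prod_{l=1}^{L} \|A_l\| \le K \Big\},
\]
where $\sigma(t) = \max(t, 0)$ is the ReLU activation applied entrywise and $\|\cdot\|$ denotes the spectral norm. Bounding this product, rather than the width or depth, is what allows the resulting complexity and approximation bounds to remain meaningful in the over-parameterized regime.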