In this paper, we analyze the number of neurons and training parameters that a neural network needs to approximate multivariate functions of bounded second mixed derivatives -- Korobov functions. We prove upper bounds on these quantities for shallow and deep neural networks, breaking the curse of dimensionality. Our bounds hold for general activation functions, including ReLU. We further prove that these bounds nearly match the minimal number of parameters any continuous function approximator needs to approximate Korobov functions, showing that neural networks are near-optimal function approximators.
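For reference, a standard definition of the Korobov space on the unit cube, following the sparse-grid literature (the paper's precise norm and boundary conditions may differ), is

\[
X^{2,\infty}([0,1]^d) \;=\; \bigl\{\, f : [0,1]^d \to \mathbb{R} \;:\; f|_{\partial [0,1]^d} = 0,\ \ \|D^{\boldsymbol{\alpha}} f\|_{\infty} < \infty \ \text{for all } |\boldsymbol{\alpha}|_{\infty} \le 2 \,\bigr\},
\]

i.e., functions vanishing on the boundary whose mixed partial derivatives of order at most two in each coordinate are bounded.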