We study the problem of how efficiently, in terms of the number of parameters, deep neural networks with the ReLU activation function can approximate functions in the Sobolev space $W^s(L_q(\Omega))$ on a bounded domain $\Omega$, where the error is measured in $L_p(\Omega)$. This problem is important for studying the application of neural networks in scientific computing and has previously been solved only in the case $p=q=\infty$. Our contribution is to provide a solution for all $1\leq p,q\leq \infty$ and $s > 0$. Our results show that deep ReLU networks significantly outperform classical methods of approximation, but that this comes at the cost of parameters which are not encodable.
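For concreteness, the quantity under study can be formalized as follows (the notation here is ours, not necessarily the paper's): writing $\mathcal{F}_n$ for the class of functions realized by deep ReLU networks with at most $n$ parameters, the problem asks how fast the worst-case approximation error over the unit ball of $W^s(L_q(\Omega))$ decays with $n$:
\[
  \sup_{\|f\|_{W^s(L_q(\Omega))}\,\le\, 1}\ \ \inf_{f_n\in\mathcal{F}_n}\ \|f - f_n\|_{L_p(\Omega)}
  \qquad\text{as } n\to\infty .
\]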