We study the computational complexity of deep ReLU (Rectified Linear Unit) neural networks for the approximation of functions from the H\"older-Zygmund space of mixed smoothness defined on the $d$-dimensional unit cube, where the dimension $d$ may be very large. The approximation error is measured in the norm of an isotropic Sobolev space. For every function $f$ from the H\"older-Zygmund space of mixed smoothness, we explicitly construct a deep ReLU neural network whose output approximates $f$ with a prescribed accuracy $\varepsilon$, and we prove tight dimension-dependent upper and lower bounds on the computational complexity of this approximation, characterized as the size and the depth of this deep ReLU neural network, explicitly in $d$ and $\varepsilon$. The proofs of these results rely, in particular, on approximation by sparse-grid sampling recovery based on the Faber series.
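A building block behind Faber-series-based ReLU constructions is that the univariate Faber hat function is realized exactly by a one-hidden-layer ReLU network with three neurons. The following minimal sketch (a numerical check in Python, illustrative only and not the paper's construction; the names \texttt{relu}, \texttt{faber\_hat}, and \texttt{relu\_network\_hat} are our own) verifies this standard identity.
\begin{verbatim}
# Sketch: the univariate Faber hat function
#   phi(x) = (1 - |2x - 1|)_+ , supported on [0, 1], peak 1 at x = 1/2,
# is represented exactly by a one-hidden-layer ReLU network with three
# neurons. This is a standard identity used in ReLU approximation via
# Faber (hat-function) series; names here are illustrative only.
import numpy as np

def relu(t):
    return np.maximum(t, 0.0)

def faber_hat(x):
    # Hat function supported on [0, 1] with peak value 1 at x = 1/2.
    return np.maximum(1.0 - np.abs(2.0 * x - 1.0), 0.0)

def relu_network_hat(x):
    # Exact ReLU representation:
    #   phi(x) = 2 relu(x) - 4 relu(x - 1/2) + 2 relu(x - 1).
    return 2.0 * relu(x) - 4.0 * relu(x - 0.5) + 2.0 * relu(x - 1.0)

x = np.linspace(-0.5, 1.5, 2001)
assert np.allclose(faber_hat(x), relu_network_hat(x))  # identity holds
\end{verbatim}
Dyadic dilates and translates of this hat function form the univariate Faber system, which is why sparse-grid sampling recovery based on the Faber series transfers naturally to explicit deep ReLU network constructions.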