The purpose of the present paper is to study the computation complexity of deep ReLU neural networks for approximating functions in H\"older-Nikol'skii spaces of mixed smoothness $H_\infty^\alpha(\mathbb{I}^d)$ on the unit cube $\mathbb{I}^d:=[0,1]^d$. In this context, for any function $f\in H_\infty^\alpha(\mathbb{I}^d)$, we explicitly construct nonadaptive and adaptive deep ReLU neural networks whose outputs approximate $f$ with a prescribed accuracy $\varepsilon$, and prove dimension-dependent bounds for the computation complexity of this approximation, characterized by the size and the depth of these networks, explicitly in $d$ and $\varepsilon$. Our results show the advantage of the adaptive method of approximation by deep ReLU neural networks over the nonadaptive one.