The purpose of this article is to develop machinery to study the capacity of deep neural networks (DNNs) to approximate high-dimensional functions. In particular, we show that DNNs have the expressive power to overcome the curse of dimensionality in the approximation of a large class of functions. More precisely, we prove that these functions can be approximated by DNNs on compact sets such that the number of parameters necessary to represent the approximating DNNs grows at most polynomially in the reciprocal $1/\varepsilon$ of the approximation accuracy $\varepsilon>0$ and in the input dimension $d\in \mathbb{N} =\{1,2,3,\dots\}$. To this end, we introduce certain approximation spaces, consisting of sequences of functions that can be efficiently approximated by DNNs. We then establish closure properties which we combine with known and new bounds on the number of parameters necessary to approximate locally Lipschitz continuous functions, maximum functions, and product functions by DNNs. The main result of this article demonstrates that DNNs have sufficient expressiveness to approximate certain sequences of functions which can be constructed by means of a finite number of compositions using locally Lipschitz continuous functions, maxima, and products without the curse of dimensionality.
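As an informal illustration of what is meant by overcoming the curse of dimensionality, the parameter counts in the main result satisfy a polynomial-growth condition of the following form; the notation $P(d,\varepsilon)$ for the number of parameters of an approximating DNN and the constant $c$ are introduced here for exposition only and do not appear verbatim in the statement above:
\[
  \exists\, c>0 \ \forall\, d\in\mathbb{N},\ \varepsilon\in(0,1] :
  \qquad
  P(d,\varepsilon) \;\le\; c\, d^{c}\, \varepsilon^{-c}.
\]
In other words, the number of DNN parameters needed to reach accuracy $\varepsilon$ on a compact set is bounded by a fixed polynomial in both the input dimension $d$ and the reciprocal accuracy $1/\varepsilon$, rather than growing exponentially in $d$.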