In this paper, we focus on approximating a natural class of functions formed by composing smooth functions. In contrast to the common assumption that the covariates lie on a low-dimensional support, we show that composite functions possess an intrinsic sparse structure whenever each layer of the composition has a small number of degrees of freedom. This structure alleviates the curse of dimensionality in the approximation error achievable by neural networks. Specifically, using mathematical induction together with the multivariate Faà di Bruno formula, we extend the approximation theory of deep neural networks to composite functions. Furthermore, by combining recent results on the statistical error of deep learning, we provide a general convergence rate analysis of the PINN method for solving elliptic equations whose solutions are compositional. We also present two simple illustrative numerical examples demonstrating the effect of the intrinsic sparse structure in regression and in solving PDEs.
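For concreteness, here is a minimal hypothetical example of such a structure (our own illustration, not one of the examples analyzed in the paper). Consider
\[
f(x_1,\dots,x_d) \;=\; g_2\bigl(g_{1,1}(x_1,x_2),\, g_{1,2}(x_3,x_4)\bigr),
\qquad
g_{1,1}(x_1,x_2)=\sin(x_1+x_2),\quad
g_{1,2}(x_3,x_4)=x_3 x_4,\quad
g_2(y_1,y_2)=e^{y_1 y_2}.
\]
Although $f$ is defined on a $d$-dimensional domain, each layer is smooth and depends on at most two arguments, so the degrees of freedom per layer stay small regardless of $d$; this is the intrinsic sparse structure referred to above.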