In this paper, we focus on approximating a natural class of functions, namely compositions of smooth functions. In contrast to the assumption that the covariates are supported on a low-dimensional set, we show that compositional functions possess an intrinsic sparse structure once each layer of the composition is assumed to have a small number of degrees of freedom. This structure alleviates the curse of dimensionality in the approximation error achievable by neural networks. Specifically, using mathematical induction and the multivariate Faà di Bruno formula, we extend the approximation theory of deep neural networks to compositional functions. Furthermore, by combining recent results on the statistical error of deep learning, we provide a general convergence rate analysis for PINNs applied to elliptic equations with compositional solutions. We also present two simple illustrative numerical examples that demonstrate the effect of the intrinsic sparse structure in regression and in solving PDEs.
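For illustration, the compositional structure in question can be pictured as follows (the symbols $g_i$, $d_i$, and $t_i$ are notation chosen here for exposition, not fixed by the results themselves):
$$
f \;=\; g_q \circ g_{q-1} \circ \cdots \circ g_1, \qquad g_i \colon \mathbb{R}^{d_i} \to \mathbb{R}^{d_{i+1}},
$$
where each component of $g_i$ is a smooth function of at most $t_i \le d_i$ of its inputs. When every $t_i$ is small, heuristically the effective dimension entering the approximation rate is $\max_i t_i$ rather than the ambient input dimension $d_1$, which is the sense in which the composition is intrinsically sparse.

Likewise, for an elliptic problem $-\Delta u = f$ in $\Omega$ with $u = g$ on $\partial\Omega$, the PINN estimator minimizes an empirical least-squares residual of the generic form
$$
\widehat{\mathcal{L}}(u_\theta) \;=\; \frac{1}{n}\sum_{j=1}^{n} \bigl|\Delta u_\theta(x_j) + f(x_j)\bigr|^2 \;+\; \frac{\lambda}{m}\sum_{k=1}^{m} \bigl|u_\theta(y_k) - g(y_k)\bigr|^2
$$
over a neural network class $\{u_\theta\}$, with interior collocation points $x_j \in \Omega$, boundary points $y_k \in \partial\Omega$, and a penalty weight $\lambda > 0$; the convergence rate analysis quantifies how fast this estimator approaches the compositional solution.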