Approximating functions of a large number of variables poses particular challenges, often subsumed under the term "Curse of Dimensionality". Unless the approximated function exhibits a very high level of smoothness, the Curse can be avoided only by exploiting some typically hidden {\em structural sparsity}. In this paper we propose a general framework for model classes of functions in high dimensions based on suitable notions of {\em compositional sparsity}, quantifying approximability by highly nonlinear expressions such as deep neural networks. The relevance of these concepts is demonstrated for {\em solution manifolds} of parametric transport equations, which are known not to enjoy the type of high-order regularity of parameter-to-solution maps that helps to avoid the Curse of Dimensionality in other model scenarios. Compositional sparsity is shown to serve as the key mechanism for proving that sparsity of problem data is inherited in a quantifiable way by the solution manifold. In particular, one obtains convergence rates for deep neural network realizations showing that the Curse of Dimensionality is indeed avoided.
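For orientation, a schematic instance of a compositionally sparse function (an illustrative form under simplifying assumptions, not the paper's precise definition) is a composition of low-dimensional layer maps, mirroring the structure of a deep neural network:
\[
  f(x) \;=\; \bigl(g_L \circ g_{L-1} \circ \cdots \circ g_1\bigr)(x),
  \qquad g_\ell : \mathbb{R}^{d_{\ell-1}} \to \mathbb{R}^{d_\ell},
\]
where each component of each $g_\ell$ depends on only a few of its input variables (or is smooth on a low-dimensional domain). Even when the ambient dimension $d_0$ is large, such an $f$ can then be approximated by composing efficient approximations of the individual $g_\ell$, which is the mechanism that the notion of compositional sparsity is meant to quantify.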