Neural operators are gaining attention in computational science and engineering. PCA-Net is a recently proposed neural operator architecture which combines principal component analysis (PCA) with neural networks to approximate an underlying operator. The present work develops approximation theory for this approach, improving and significantly extending previous work in this direction. In terms of qualitative bounds, this paper derives a novel universal approximation result, under minimal assumptions on the underlying operator and the data-generating distribution. In terms of quantitative bounds, two potential obstacles to efficient operator learning with PCA-Net are identified, and made rigorous through the derivation of lower complexity bounds; the first relates to the complexity of the output distribution, measured by a slow decay of the PCA eigenvalues. The other obstacle relates to the inherent complexity of the space of operators between infinite-dimensional input and output spaces, resulting in a rigorous and quantifiable statement of the curse of dimensionality. In addition to these lower bounds, upper complexity bounds are derived; first, a suitable smoothness criterion is shown to ensure an algebraic decay of the PCA eigenvalues. Then, it is shown that PCA-Net can overcome the general curse of dimensionality for specific operators of interest, arising from the Darcy flow and Navier-Stokes equations.
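To make the architecture concrete, the following is a minimal sketch of the PCA-Net idea: PCA encodes input and output functions (sampled on a grid) into a few coefficients, and a neural network maps input coefficients to output coefficients. The synthetic "operator", grid sizes, component counts, and network widths below are illustrative assumptions, not the paper's setup or implementation.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_samples, n_grid = 2000, 256          # training pairs, grid resolution
x = np.linspace(0.0, 1.0, n_grid)

# Synthetic data: random smooth inputs u and outputs G(u) from a toy
# nonlinear operator (pointwise squaring followed by Gaussian smoothing);
# this stands in for the data-generating distribution and operator.
coeffs = rng.standard_normal((n_samples, 8)) / np.arange(1, 9)
U = coeffs @ np.sin(np.outer(np.arange(1, 9), np.pi * x))   # inputs u
kernel = np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.01)
kernel /= kernel.sum(axis=1, keepdims=True)
V = (U ** 2) @ kernel.T                                     # outputs G(u)

# Encoder/decoder: PCA in the input and output function spaces.
pca_in, pca_out = PCA(n_components=20), PCA(n_components=20)
A = pca_in.fit_transform(U)        # input PCA coefficients
B = pca_out.fit_transform(V)       # output PCA coefficients

# Finite-dimensional map between coefficient spaces: a small MLP.
net = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=2000,
                   random_state=0).fit(A, B)

# Full PCA-Net prediction: encode -> map -> decode.
V_pred = pca_out.inverse_transform(net.predict(pca_in.transform(U)))
rel_err = np.linalg.norm(V_pred - V) / np.linalg.norm(V)
print(f"relative training error: {rel_err:.3e}")

# pca_out.explained_variance_ gives the empirical PCA eigenvalues of the
# output distribution; slow decay signals the first obstacle noted above.
print(pca_out.explained_variance_[:10])
```

In this decomposition, the reconstruction quality is limited by how fast the output PCA eigenvalues decay, while the difficulty of the coefficient-to-coefficient map governs the required network size, mirroring the two obstacles identified above.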