PCA-Net is a recently proposed neural operator architecture that combines principal component analysis (PCA) with neural networks to approximate operators between infinite-dimensional function spaces. The present work develops approximation theory for this approach, improving and significantly extending previous work in this direction. First, a novel universal approximation result is derived, under minimal assumptions on the underlying operator and the data-generating distribution. Then, two potential obstacles to efficient operator learning with PCA-Net are identified and made precise through lower complexity bounds; the first relates to the complexity of the output distribution, measured by a slow decay of the PCA eigenvalues. The other obstacle relates to the inherent complexity of the space of operators between infinite-dimensional input and output spaces, resulting in a rigorous and quantifiable statement of the curse of dimensionality. In addition to these lower bounds, upper complexity bounds are derived. A suitable smoothness criterion is shown to ensure an algebraic decay of the PCA eigenvalues. Furthermore, it is shown that PCA-Net can overcome the general curse of dimensionality for specific operators of interest, arising from the Darcy flow and the Navier-Stokes equations.
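The architecture described above can be illustrated with a minimal, self-contained sketch: PCA on the discretized input functions acts as an encoder, PCA on the output functions as a decoder, and a finite-dimensional map connects the two coefficient spaces. All names, the toy operator (pointwise squaring), and the data are illustrative assumptions; in PCA-Net proper, the core map is a trained neural network, for which we substitute a runnable least-squares fit on polynomial features.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy operator-learning data: each row discretizes a function on a grid.
# Hypothetical operator: pointwise squaring (nonlinear but smooth).
n_samples, n_grid = 200, 64
x = np.linspace(0.0, 1.0, n_grid)
c = rng.normal(size=(n_samples, 5))
modes = np.sin(np.pi * np.outer(np.arange(1, 6), x))  # 5 smooth input modes
inputs = c @ modes
outputs = inputs ** 2

def pca(data, k):
    """Return the mean, top-k principal directions, and PCA coefficients."""
    mean = data.mean(axis=0)
    _, _, vt = np.linalg.svd(data - mean, full_matrices=False)
    comps = vt[:k]
    return mean, comps, (data - mean) @ comps.T

# Encoder: PCA of the inputs; decoder: PCA of the outputs.
in_mean, in_comps, a = pca(inputs, k=5)
out_mean, out_comps, b = pca(outputs, k=15)

# Finite-dimensional core map a -> b.  PCA-Net trains a neural network
# here; as a stand-in we fit degree-2 polynomial features, which happen
# to suffice for this quadratic toy operator.
i, j = np.triu_indices(a.shape[1])
feats = np.hstack([np.ones((n_samples, 1)), a, a[:, i] * a[:, j]])
W = np.linalg.lstsq(feats, b, rcond=None)[0]

# Decode the predicted coefficients back to functions on the grid.
recon = out_mean + (feats @ W) @ out_comps
rel_err = np.linalg.norm(recon - outputs) / np.linalg.norm(outputs)
print(f"relative L2 error: {rel_err:.2e}")
```

Note that the dimensions `k=5` and `k=15` are chosen by hand for this toy example; the eigenvalue-decay results in the text govern how fast these latent dimensions must grow for a target accuracy on realistic operators.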