The classical development of neural networks has primarily focused on learning mappings between finite dimensional Euclidean spaces or finite sets. We propose a generalization of neural networks to learn operators, termed neural operators, that map between infinite dimensional function spaces. We formulate the neural operator as a composition of linear integral operators and nonlinear activation functions. We prove a universal approximation theorem for our proposed neural operator, showing that it can approximate any given nonlinear continuous operator. The proposed neural operators are also discretization-invariant, i.e., they share the same model parameters across different discretizations of the underlying function spaces. Furthermore, we introduce four classes of efficient parameterizations, viz., graph neural operators, multi-pole graph neural operators, low-rank neural operators, and Fourier neural operators. An important application for neural operators is learning surrogate maps for the solution operators of partial differential equations (PDEs). We consider standard PDEs such as the Burgers, Darcy subsurface flow, and the Navier-Stokes equations, and show that the proposed neural operators have superior performance compared to existing machine learning based methodologies, while being several orders of magnitude faster than conventional PDE solvers.
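To make the "composition of linear integral operators and nonlinear activation functions" concrete, here is a minimal PyTorch sketch of a single Fourier neural operator layer, v ↦ σ(Wv + F⁻¹(R·F(v))), where the integral operator is applied by multiplying the lowest Fourier modes of the input by a learned weight R. The class name, sizes, and the GELU activation are illustrative assumptions, not the authors' reference implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FourierLayer1d(nn.Module):
    """One Fourier neural operator layer for 1-D functions.

    Computes sigma(W v + F^{-1}(R . F(v))): a pointwise linear map W plus
    a kernel integral operator applied in Fourier space via a learned
    complex multiplier R on the lowest `modes` frequencies.
    """
    def __init__(self, channels: int, modes: int):
        super().__init__()
        self.modes = modes  # number of retained Fourier modes (<= n_grid // 2 + 1)
        scale = 1.0 / (channels * channels)
        # Learned spectral weight R: one complex matrix per retained mode.
        self.weights = nn.Parameter(
            scale * torch.randn(channels, channels, modes, dtype=torch.cfloat))
        # Pointwise (local) linear map W, implemented as a 1x1 convolution.
        self.w = nn.Conv1d(channels, channels, kernel_size=1)

    def forward(self, v: torch.Tensor) -> torch.Tensor:
        # v: (batch, channels, n_grid) -- samples of the input function on a grid.
        v_hat = torch.fft.rfft(v)                     # transform along the grid
        out_hat = torch.zeros_like(v_hat)
        # Multiply the lowest `modes` frequencies by the learned weights.
        out_hat[:, :, :self.modes] = torch.einsum(
            "bim,iom->bom", v_hat[:, :, :self.modes], self.weights)
        k_v = torch.fft.irfft(out_hat, n=v.size(-1))  # back to physical space
        return F.gelu(self.w(v) + k_v)
```

Because the learned parameters act only on a fixed set of Fourier modes, the same layer can be evaluated on grids of different resolutions without retraining, e.g. `layer(torch.randn(4, 32, 64))` and `layer(torch.randn(4, 32, 256))` share identical weights, which is one way the discretization invariance claimed in the abstract can be realized.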