The classical development of neural networks has primarily focused on learning mappings between finite-dimensional Euclidean spaces or finite sets. We propose a generalization of neural networks to learn operators, termed neural operators, that map between infinite-dimensional function spaces. We formulate the neural operator as a composition of linear integral operators and nonlinear activation functions. We prove a universal approximation theorem for our proposed neural operator, showing that it can approximate any given nonlinear continuous operator. The proposed neural operators are also discretization-invariant, i.e., they share the same model parameters across different discretizations of the underlying function spaces. Furthermore, we introduce four classes of efficient parameterizations, viz., graph neural operators, multi-pole graph neural operators, low-rank neural operators, and Fourier neural operators. An important application for neural operators is learning surrogate maps for the solution operators of partial differential equations (PDEs). We consider standard PDEs such as the Burgers, Darcy subsurface flow, and the Navier-Stokes equations, and show that the proposed neural operators have superior performance compared to existing machine-learning-based methodologies, while being several orders of magnitude faster than conventional PDE solvers.
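To make the "composition of linear integral operators and nonlinear activation functions" concrete, the following is a minimal sketch of one Fourier neural operator block for functions sampled on a uniform 1-D grid. It is not the authors' reference implementation; the names `SpectralConv1d`, `FourierLayer`, `width`, and `modes` are illustrative assumptions. The learned weights act only on a fixed number of Fourier modes, so the parameter count is independent of the grid resolution, which is the sense in which the layer is discretization-invariant.

```python
# Minimal sketch (assumed names, not the authors' code) of one Fourier
# neural operator block for 1-D functions on a uniform grid.
import torch
import torch.nn as nn


class SpectralConv1d(nn.Module):
    """Linear integral operator parameterized in Fourier space: keep the
    lowest `modes` frequencies and multiply them by learned complex weights."""

    def __init__(self, width, modes):
        super().__init__()
        self.modes = modes  # must satisfy modes <= n_grid_points // 2 + 1
        scale = 1.0 / (width * width)
        self.weights = nn.Parameter(
            scale * torch.randn(width, width, modes, dtype=torch.cfloat)
        )

    def forward(self, x):          # x: (batch, width, n_grid_points)
        x_ft = torch.fft.rfft(x)   # forward FFT along the grid dimension
        out_ft = torch.zeros_like(x_ft)
        # Multiply the retained low-frequency modes by the learned weights.
        out_ft[:, :, :self.modes] = torch.einsum(
            "bim,iom->bom", x_ft[:, :, :self.modes], self.weights
        )
        return torch.fft.irfft(out_ft, n=x.size(-1))  # back to physical space


class FourierLayer(nn.Module):
    """One operator block: (integral operator + pointwise linear map)
    followed by a nonlinear activation, as described in the abstract."""

    def __init__(self, width, modes):
        super().__init__()
        self.spectral = SpectralConv1d(width, modes)
        self.pointwise = nn.Conv1d(width, width, kernel_size=1)

    def forward(self, x):
        return torch.relu(self.spectral(x) + self.pointwise(x))
```

Because the same `weights` tensor is reused regardless of how many grid points the input carries, the same trained layer can be evaluated on coarser or finer discretizations of the input function.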