Fourier neural operators (FNOs) have recently been proposed as an effective framework for learning operators that map between infinite-dimensional spaces. We prove that FNOs are universal, in the sense that they can approximate any continuous operator to desired accuracy. Moreover, we suggest a mechanism by which FNOs can approximate operators associated with PDEs efficiently. Explicit error bounds are derived showing that the size of the FNO needed to approximate operators associated with a Darcy-type elliptic PDE and with the incompressible Navier-Stokes equations of fluid dynamics grows only sub-linearly (up to logarithmic factors) in the reciprocal of the error. Thus, FNOs are shown to efficiently approximate operators arising in a large class of PDEs.
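The FNO architecture alternates pointwise linear maps with convolutions parameterized directly in Fourier space. As a rough illustration only (not the paper's implementation), the sketch below shows a single Fourier layer on a 1-D uniform grid; the names `fourier_layer`, `W`, `R`, and `k_max` are hypothetical placeholders, and a full FNO would compose several such layers between lifting and projection maps.

```python
import numpy as np

def fourier_layer(v, W, R, k_max):
    """Illustrative single Fourier layer: v -> sigma(W v + IFFT(R * truncated FFT(v))).

    v : (n, d_v) real array -- function values on a uniform 1-D grid.
    W : (d_v, d_v) pointwise (local) linear weight.
    R : (k_max, d_v, d_v) complex spectral weights, one matrix per retained mode.
    """
    n, d_v = v.shape
    # Pointwise linear part, applied independently at every grid point.
    local = v @ W.T
    # Forward FFT along the grid dimension; keep only the lowest k_max modes.
    v_hat = np.fft.rfft(v, axis=0)[:k_max]                  # (k_max, d_v), complex
    # Mode-wise channel mixing: each retained mode k is multiplied by R[k].
    out_hat = np.einsum("kio,ki->ko", R, v_hat)
    # Zero-pad back to the full spectrum and invert the FFT.
    full_hat = np.zeros((n // 2 + 1, d_v), dtype=complex)
    full_hat[:k_max] = out_hat
    nonlocal_part = np.fft.irfft(full_hat, n=n, axis=0)
    # Smooth nonlinearity (tanh approximation of GELU); any smooth activation works
    # for the universality argument.
    x = local + nonlocal_part
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))
```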