By learning mappings between infinite-dimensional function spaces with carefully designed neural networks, operator learning has proven substantially more efficient than traditional methods for solving complex problems such as differential equations, but concerns remain about its accuracy and reliability. To overcome these limitations, and drawing on the structure of spectral numerical methods, a general neural architecture named spectral operator learning (SOL) is introduced, together with one of its variants, the orthogonal polynomial neural operator (OPNO), developed for PDEs with Dirichlet, Neumann, and Robin boundary conditions (BCs). The strict BC-satisfaction property and the universal approximation capacity of the OPNO are theoretically proven. A variety of numerical experiments with physical backgrounds show that the OPNO outperforms other existing deep learning methods as well as the traditional second-order finite difference method (FDM) on a considerably fine mesh, with relative errors reaching the order of 1e-6, and runs up to almost five orders of magnitude faster than the traditional method.