In this article we propose a new deep learning approach for the approximate solution of parametric partial differential equations (PDEs). In particular, we introduce a new strategy for designing artificial neural network (ANN) architectures in conjunction with ANN initialization schemes that are tailor-made for the scientific computing approximation problem under consideration. In the proposed approach we combine efficient classical numerical approximation techniques, such as higher-order Runge-Kutta schemes, with sophisticated deep (operator) learning methodologies, such as the recently introduced Fourier neural operators (FNOs). Specifically, we introduce customized adaptations of existing standard ANN architectures together with specialized initializations for these architectures so that, at initialization, the ANNs closely mimic a chosen efficient classical numerical algorithm for the considered approximation problem. The obtained ANN architectures and their initialization schemes are thus strongly inspired both by numerical algorithms and by popular deep learning methodologies from the literature, and in this sense we refer to the introduced ANNs in conjunction with their tailor-made initialization schemes as Algorithmically Designed Artificial Neural Networks (ADANNs). We numerically test the proposed ADANN approach on several parametric PDEs. In the tested numerical examples the ADANN approach significantly outperforms existing traditional approximation algorithms as well as existing deep learning methodologies from the literature.
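To make the initialization idea concrete, the following minimal PyTorch sketch shows, under illustrative assumptions, what "mimicking a classical numerical algorithm at initialization" could look like for the one-dimensional heat equation with a finite-difference discretization and an explicit midpoint (second-order Runge-Kutta) time step. The module name `ADANNStep`, the discretization, and the choice of scheme are assumptions for illustration only and are not the exact ADANN architectures used in the article.

```python
import torch
import torch.nn as nn

def periodic_laplacian(n, dx):
    """Second-order finite-difference Laplacian with periodic boundary conditions."""
    L = torch.zeros(n, n)
    for i in range(n):
        L[i, i] = -2.0
        L[i, (i - 1) % n] = 1.0
        L[i, (i + 1) % n] = 1.0
    return L / dx ** 2

class ADANNStep(nn.Module):
    """One learnable time step u_n -> u_{n+1} for the heat equation u_t = u_xx.

    Hypothetical sketch: both weight matrices are initialized with the
    finite-difference Laplacian, so the initial forward pass coincides with
    the classical explicit midpoint (second-order Runge-Kutta) step; training
    can then move the weights away from the classical scheme to reduce the
    remaining discretization error.
    """
    def __init__(self, n, dx, dt):
        super().__init__()
        L = periodic_laplacian(n, dx)
        self.dt = dt
        self.A = nn.Parameter(L.clone())  # inner (half-step) operator
        self.B = nn.Parameter(L.clone())  # outer (full-step) operator

    def forward(self, u):
        # Explicit midpoint rule: u_{n+1} = u + dt * B (u + dt/2 * A u).
        u_half = u + 0.5 * self.dt * u @ self.A.T
        return u + self.dt * u_half @ self.B.T

# At initialization the module output equals the classical RK2 step exactly.
n, dx, dt = 64, 1.0 / 64, 1e-4
step = ADANNStep(n, dx, dt)
u0 = torch.randn(8, n)  # batch of 8 initial conditions on a 64-point grid
u1 = step(u0)
```

In this sketch the trainable parameters start at the classical scheme rather than at a random draw, so the model's initial error is already that of the underlying Runge-Kutta method and training only has to learn a correction.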