Transfer learning for partial differential equations (PDEs) aims to develop a pre-trained neural network that can be used to solve a wide class of PDEs. Existing transfer learning approaches require substantial information about the target PDEs, such as their formulation and/or data of their solutions, for pre-training. In this work, we propose to construct transferable neural feature spaces from a pure function-approximation perspective, without using any PDE information. The construction of the feature space involves a re-parameterization of the hidden neurons and uses auxiliary functions to tune the resulting feature space. Theoretical analysis shows the high quality of the produced feature space, i.e., uniformly distributed neurons. Extensive numerical experiments verify the outstanding performance of our method, including significantly improved transferability, e.g., using the same feature space for various PDEs with different domains and boundary conditions, and superior accuracy, e.g., a mean squared error several orders of magnitude smaller than that of state-of-the-art methods.
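To make the idea concrete, the following is a minimal sketch of a PDE-agnostic random feature space: neurons are re-parameterized as unit direction vectors with uniformly sampled offsets, so their partition hyperplanes cover the domain evenly, and only the output-layer coefficients are fit per task by least squares. All names (`build_feature_space`, `gamma`) are hypothetical, and the paper's exact re-parameterization and auxiliary-function tuning are more involved than this illustration.

```python
import numpy as np

def build_feature_space(num_neurons, dim, gamma, seed=0):
    """Sketch: sample fixed tanh neurons whose hyperplanes are spread
    roughly uniformly over the unit ball (not the paper's exact scheme)."""
    rng = np.random.default_rng(seed)
    # Unit direction vectors: normalized Gaussian samples.
    a = rng.standard_normal((num_neurons, dim))
    a /= np.linalg.norm(a, axis=1, keepdims=True)
    # Offsets drawn uniformly so hyperplanes cover the domain evenly.
    r = rng.uniform(-1.0, 1.0, size=num_neurons)
    # gamma is a shape parameter; the paper tunes the feature space
    # with auxiliary functions instead of fixing it by hand.
    def features(x):
        # x: (n_points, dim) -> feature matrix (n_points, num_neurons)
        return np.tanh(gamma * (x @ a.T + r))
    return features

# Reuse the same fixed features for any target: only the linear
# output coefficients are solved, here by least squares.
phi = build_feature_space(num_neurons=200, dim=2, gamma=2.0)
x = np.random.default_rng(1).uniform(-1, 1, size=(500, 2))
y = np.sin(np.pi * x[:, 0]) * np.cos(np.pi * x[:, 1])
coef, *_ = np.linalg.lstsq(phi(x), y, rcond=None)
print(np.linalg.norm(phi(x) @ coef - y) / np.linalg.norm(y))
```

In the PDE setting, the least-squares fit above would be replaced by solving for the output coefficients so that the fixed features satisfy the PDE residual and boundary conditions; the feature space itself never changes across problems.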