Deep-learning-based methods are increasingly employed to address the computational challenges of high-dimensional partial differential equations (PDEs). However, computing high-order derivatives of neural networks is costly, and high-order derivatives lack robustness during training. We propose a novel approach to solving PDEs with high-order derivatives by simultaneously approximating the function value and its derivatives. Following the local discontinuous Galerkin method, we introduce intermediate variables to rewrite the PDE as a system of low-order differential equations. The intermediate variables and the solution of the PDE are approximated simultaneously by a multi-output deep neural network. Taking the residual of the system as the loss function, we optimize the network parameters to approximate the solution. The whole process relies only on low-order derivatives. Numerous numerical examples demonstrate that our local deep learning method is efficient, robust, flexible, and particularly well-suited for high-dimensional PDEs with high-order derivatives.
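The rewriting step can be illustrated on a simple one-dimensional model problem. The sketch below (an assumption for illustration, not the paper's code) takes the biharmonic equation u'''' = f on [0, 1], introduces the intermediate variable p = u'' to obtain the second-order system u'' = p, p'' = f, and verifies with a known exact solution that the residuals of the low-order system vanish; a learning-based method would evaluate these same residuals on network outputs as the loss, requiring only second derivatives.

```python
import numpy as np

def second_derivative(v, h):
    """Central finite-difference approximation of v'' on a uniform interior grid."""
    return (v[:-2] - 2.0 * v[1:-1] + v[2:]) / h**2

# Uniform grid on [0, 1]
n = 201
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]

# Exact solution u = sin(pi x) of u'''' = f with f = pi^4 sin(pi x);
# the intermediate variable is p = u'' = -pi^2 sin(pi x).
u = np.sin(np.pi * x)
p = -np.pi**2 * np.sin(np.pi * x)
f = np.pi**4 * np.sin(np.pi * x)

# Residuals of the rewritten low-order system u'' - p = 0 and p'' - f = 0.
res_u = second_derivative(u, h) - p[1:-1]
res_p = second_derivative(p, h) - f[1:-1]

print(np.max(np.abs(res_u)), np.max(np.abs(res_p)))
```

Both residuals are small (of order h^2 from the finite-difference truncation error), confirming that the second-order system encodes the same solution as the fourth-order equation without ever forming a fourth derivative.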