We present a neural network-based method for solving linear and nonlinear partial differential equations by combining the ideas of extreme learning machines (ELM), domain decomposition, and local neural networks. The field solution on each sub-domain is represented by a local feed-forward neural network, and $C^k$ continuity is imposed on the sub-domain boundaries. Each local neural network consists of a small number of hidden layers, while its last hidden layer can be wide. The weight/bias coefficients in all hidden layers of the local neural networks are pre-set to random values and are fixed; only the weight coefficients in the output layers are training parameters. The overall neural network is trained by a linear or nonlinear least-squares computation, not by back-propagation-type algorithms. We introduce a block time-marching scheme together with the presented method for long-time dynamic simulations. The current method exhibits a clear sense of convergence with respect to the degrees of freedom in the neural network. Its numerical errors typically decrease exponentially or nearly exponentially as the number of degrees of freedom increases. Extensive numerical experiments have been performed to demonstrate the computational performance of the presented method. We compare the current method with the deep Galerkin method (DGM) and the physics-informed neural network (PINN) in terms of accuracy and computational cost. The current method exhibits a clear superiority, with its numerical errors and network training time considerably smaller (typically by orders of magnitude) than those of DGM and PINN. We also compare the current method with the classical finite element method (FEM). The computational performance of the current method is on par with, and oftentimes exceeds, the FEM performance.
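The core ELM idea described above — hidden-layer weights fixed at random values, with only the output-layer weights determined by a least-squares solve over collocation and boundary conditions — can be illustrated with a minimal single-domain sketch (no domain decomposition or time marching). This is an illustrative toy, not the paper's implementation; the network width, the random-weight range, and the test problem $u''(x) = -\pi^2 \sin(\pi x)$ with $u(0)=u(1)=0$ (exact solution $u = \sin(\pi x)$) are all assumptions chosen for demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)
M = 200                       # width of the single hidden layer (illustrative)
w = rng.uniform(-5.0, 5.0, M) # hidden weights: random and fixed, never trained
b = rng.uniform(-5.0, 5.0, M) # hidden biases: random and fixed, never trained

def phi(x):
    """Hidden-layer feature matrix: phi[i, j] = tanh(w_j * x_i + b_j)."""
    return np.tanh(np.outer(x, w) + b)

def phi_xx(x):
    """Analytic second derivative of the tanh features w.r.t. x."""
    t = np.tanh(np.outer(x, w) + b)
    return -2.0 * t * (1.0 - t**2) * w**2

# Collocation points for u''(x) = -pi^2 sin(pi x) on (0,1), plus the two
# boundary conditions u(0) = u(1) = 0; exact solution is u(x) = sin(pi x).
x = np.linspace(0.0, 1.0, 100)
A = np.vstack([phi_xx(x), phi(np.array([0.0])), phi(np.array([1.0]))])
rhs = np.concatenate([-np.pi**2 * np.sin(np.pi * x), [0.0, 0.0]])

# Only the output-layer weights beta are "trained", by one linear
# least-squares solve -- no back-propagation, no iterative optimizer.
beta, *_ = np.linalg.lstsq(A, rhs, rcond=None)

# Evaluate the learned solution against the exact one on a fine grid.
xt = np.linspace(0.0, 1.0, 1000)
err = np.max(np.abs(phi(xt) @ beta - np.sin(np.pi * xt)))
print(f"max error: {err:.2e}")
```

For a nonlinear PDE the same structure leads to a nonlinear least-squares problem in the output weights, solvable by e.g. Gauss-Newton; the hidden layers remain fixed in either case.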