Randomized neural networks (RNNs) are a variant of neural networks in which the hidden-layer parameters are fixed to randomly assigned values and the output-layer parameters are obtained by solving a linear system by least squares. This improves efficiency without degrading the accuracy of the network. In this paper, we combine the idea of local RNNs (LRNNs) with the discontinuous Galerkin (DG) approach for solving partial differential equations. RNNs are used to approximate the solution on the subdomains, and the DG formulation is used to glue them together. Taking the Poisson problem as a model, we propose three numerical schemes and provide convergence analyses. We then extend these ideas to time-dependent problems. Taking the heat equation as a model, we propose three space-time LRNN with DG formulations. Finally, we present numerical tests to demonstrate the performance of the methods developed herein. We compare the proposed methods with the finite element method and the usual DG method. The LRNN-DG methods can achieve better accuracy with the same number of degrees of freedom, indicating that this new approach has great potential for solving partial differential equations.
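To make the randomized-network idea in the abstract concrete, the following is a minimal sketch (not the authors' LRNN-DG implementation) of a single-hidden-layer randomized network: the hidden-layer weights and biases are drawn once at random and kept fixed, and only the output-layer weights are determined by a linear least-squares solve. The target function, interval, activation, and the value of `n_hidden` are illustrative assumptions.

```python
# Minimal sketch of a randomized neural network (ELM-style), assuming a
# 1D function-approximation task; not the paper's PDE solver.
import numpy as np

rng = np.random.default_rng(0)

# Sample points and an illustrative target function on [0, 1].
x = np.linspace(0.0, 1.0, 200).reshape(-1, 1)
y = np.sin(2.0 * np.pi * x)

# Hidden-layer parameters: randomly assigned once and then held fixed.
n_hidden = 50
W = rng.uniform(-5.0, 5.0, size=(1, n_hidden))
b = rng.uniform(-5.0, 5.0, size=(1, n_hidden))

# Hidden-layer feature matrix with a tanh activation.
H = np.tanh(x @ W + b)

# Output-layer weights obtained by solving a linear least-squares problem.
beta, *_ = np.linalg.lstsq(H, y, rcond=None)

# Evaluate the trained network and report the approximation error.
y_hat = H @ beta
print("max abs error:", np.abs(y_hat - y).max())
```

Because only the output layer is trained, the fit reduces to one linear solve; the LRNN-DG methods in the paper apply this idea locally on subdomains and couple the local networks through the DG formulation.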