We studied the least-squares ReLU neural network (LSNN) method for solving the linear advection-reaction equation with discontinuous solutions in [Cai, Zhiqiang, Jingshuang Chen, and Min Liu. "Least-squares ReLU neural network (LSNN) method for linear advection-reaction equation." Journal of Computational Physics 443 (2021): 110514]. The method is based on the least-squares formulation and employs a new class of approximating functions: multilayer perceptrons with the rectified linear unit (ReLU) activation function, i.e., ReLU deep neural networks (DNNs). In this paper, we first show that a ReLU DNN of depth $\lceil \log_2(d+1)\rceil+1$ can approximate any $d$-dimensional step function on an arbitrary discontinuous interface to any prescribed accuracy. By decomposing the solution into continuous and discontinuous parts, we prove theoretically that the discretization error of the LSNN method using a DNN of depth $\lceil \log_2(d+1)\rceil+1$ is determined mainly by the continuous part of the solution, provided that the solution jump is constant. Numerical results for both two- and three-dimensional problems with various discontinuous interfaces show that the LSNN method with sufficiently many layers is accurate and does not exhibit the common Gibbs phenomenon along interfaces.
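As a rough illustration of the least-squares formulation (a schematic only; the precise weighted norms, in particular the inflow-boundary weighting, are those of the cited paper and are not reproduced here), write the equation as $\boldsymbol{\beta}\cdot\nabla u + \gamma u = f$ in $\Omega$ with inflow data $u = g$ on $\Gamma_-$. The LSNN method then minimizes a functional of the form
$$
\mathcal{L}(v;\, f, g) \,=\, \|\boldsymbol{\beta}\cdot\nabla v + \gamma v - f\|_{0,\Omega}^2 \,+\, \|v - g\|_{0,\Gamma_-}^2
$$
over the set of functions $v$ realized by ReLU DNNs of a fixed architecture, rather than over a mesh-based finite element space.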
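To make the depth claim concrete in the simplest case $d=1$ (where $\lceil\log_2(d+1)\rceil+1 = 2$), the following worked example, ours rather than the paper's, shows that a two-layer ReLU network reproduces the unit step function up to an arbitrarily small transition width $\epsilon>0$:
$$
\chi_\epsilon(x) \,=\, \frac{1}{\epsilon}\,\sigma(x) \,-\, \frac{1}{\epsilon}\,\sigma(x-\epsilon), \qquad \sigma(t)=\max\{t,\,0\},
$$
which equals $0$ for $x\le 0$, equals $1$ for $x\ge \epsilon$, and is linear in between; hence $\chi_\epsilon$ approximates the Heaviside step function in $L^2$ to any prescribed accuracy as $\epsilon\to 0$, with no Gibbs overshoot.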