We studied the least-squares ReLU neural network (LSNN) method for solving the linear advection-reaction equation with discontinuous solutions in [Cai, Zhiqiang, Jingshuang Chen, and Min Liu. ``Least-squares ReLU neural network (LSNN) method for linear advection-reaction equation.'' Journal of Computational Physics 443 (2021), 110514]. The method is based on a least-squares formulation and uses a new class of approximating functions: ReLU neural network (NN) functions. A critical additional component of the LSNN method, distinguishing it from other NN-based methods, is the introduction of a properly designed, physics-preserving discrete differential operator. In this paper, we study the LSNN method for problems with discontinuity interfaces. First, we show that ReLU NN functions of depth $\lceil \log_2(d+1)\rceil+1$ can approximate, to any prescribed accuracy, any $d$-dimensional step function whose discontinuity interface is generated by a vector field as its streamlines. By decomposing the solution into continuous and discontinuous parts, we then prove that the discretization error of the LSNN method using ReLU NN functions of depth $\lceil \log_2(d+1)\rceil+1$ is determined mainly by the continuous part of the solution, provided that the solution jump is constant. Numerical results for two- and three-dimensional test problems with various discontinuity interfaces show that the LSNN method with sufficiently many layers is accurate and does not exhibit the common Gibbs phenomenon along discontinuity interfaces.
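As a minimal illustration of the depth bound (our own sketch, not taken from the abstract; the transition width $\varepsilon$ and the notation $s_\varepsilon$, $\sigma$ are ours): for $d=1$ the prescribed depth is $\lceil\log_2 2\rceil+1=2$, i.e. a single hidden layer, and the two-neuron ReLU network
$$
s_\varepsilon(x)=\sigma\!\Big(\frac{x}{\varepsilon}\Big)-\sigma\!\Big(\frac{x-\varepsilon}{\varepsilon}\Big),
\qquad \sigma(t)=\max(0,t),
$$
equals $0$ for $x\le 0$, equals $1$ for $x\ge\varepsilon$, and is linear on $(0,\varepsilon)$; hence it approximates the unit step function to any prescribed accuracy in $L^p$ ($p<\infty$) as $\varepsilon\to 0$.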