In this paper, we perform a convergence analysis of the unsupervised Legendre--Galerkin neural network (ULGNet), a deep-learning-based numerical method for solving partial differential equations (PDEs). Unlike existing deep-learning-based numerical methods for PDEs, ULGNet expresses the solution as a spectral expansion in the Legendre basis and predicts the coefficients with deep neural networks by solving a variational residual minimization problem. Since the corresponding loss function is equivalent to the residual of the linear algebraic system induced by the choice of basis functions, we prove that the minimizer of the discrete loss function converges to the weak solution of the PDEs. Numerical evidence is also provided to support the theoretical result. Key technical tools include a variant of the universal approximation theorem for bounded neural networks, an analysis of the stiffness and mass matrices, and a uniform law of large numbers stated in terms of Rademacher complexity.
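To make the abstract's central claim concrete, the following is a minimal sketch (not the authors' code) of the Legendre--Galerkin residual loss on the model problem $-u'' = f$ on $(-1,1)$ with homogeneous Dirichlet boundary conditions. It uses the compact basis $\phi_k = L_k - L_{k+2}$, for which the stiffness matrix is diagonal; the "discrete loss" is exactly the squared residual of the resulting linear system, and plain gradient descent on the coefficients stands in for the neural network that ULGNet would train. The basis size `N` and the choice of right-hand side are illustrative assumptions.

```python
import numpy as np
from numpy.polynomial import legendre as leg

# Model problem: -u'' = f on (-1, 1), u(-1) = u(1) = 0, with the compact
# Legendre-Galerkin basis phi_k(x) = L_k(x) - L_{k+2}(x), which vanishes
# at both endpoints.
N = 10                            # number of basis functions (assumption)
x, w = leg.leggauss(64)           # Gauss-Legendre quadrature nodes/weights

def phi(k, t):
    """Evaluate phi_k(t) = L_k(t) - L_{k+2}(t)."""
    coef = np.zeros(k + 3)
    coef[k], coef[k + 2] = 1.0, -1.0
    return leg.legval(t, coef)

# For this basis the stiffness matrix is diagonal:
# (phi_j', phi_k') = (4k + 6) * delta_jk.
A = np.diag([4.0 * k + 6.0 for k in range(N)])

# Load vector for f(x) = pi^2 sin(pi x); the exact solution is sin(pi x).
f = np.pi**2 * np.sin(np.pi * x)
b = np.array([np.sum(w * f * phi(k, x)) for k in range(N)])

# The discrete loss is the squared residual of the linear system A c = b.
# ULGNet minimizes this loss over the output c of a neural network; plain
# gradient descent on the coefficients stands in for the network here.
c = np.zeros(N)
lr = 5e-4
for _ in range(5000):
    c -= lr * 2.0 * A.T @ (A @ c - b)   # gradient of ||A c - b||^2

u_approx = sum(c[k] * phi(k, x) for k in range(N))
max_err = np.max(np.abs(u_approx - np.sin(np.pi * x)))
```

Driving the residual to zero recovers the standard Legendre--Galerkin solution, which is the sense in which the loss minimizer converges to the weak solution.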