This paper is concerned with a novel deep learning method for variational problems with essential boundary conditions. To this end, we first reformulate the original problem as a minimax problem corresponding to a feasible augmented Lagrangian, which can be solved by the augmented Lagrangian method in an infinite-dimensional setting. Based on this, by representing the primal and dual variables with two individual deep neural network functions, we present an augmented Lagrangian deep learning method whose parameters are trained by a stochastic optimization method together with a projection technique. Compared to the traditional penalty method, the new method has two main advantages: i) the choice of the penalty parameter is flexible and robust, and ii) the numerical solution is more accurate at the same order of computational cost. As typical applications, we apply the new approach to solve elliptic problems and (nonlinear) eigenvalue problems with essential boundary conditions, and numerical experiments are presented to show the effectiveness of the new method.
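For concreteness, the display below sketches what such a minimax reformulation typically looks like for the model Poisson problem with a Dirichlet (essential) boundary condition; the functional $L_\beta$, the multiplier $\lambda$, and the penalty parameter $\beta$ are illustrative notation for a standard augmented Lagrangian and are an assumption of this sketch rather than the paper's exact formulation.

% Model problem (assumed for illustration): -\Delta u = f in \Omega, u = g on \partial\Omega.
% A standard augmented Lagrangian relaxes the essential boundary constraint with a
% multiplier \lambda and a penalty parameter \beta > 0:
\begin{equation*}
  \min_{u}\;\max_{\lambda}\;
  L_\beta(u,\lambda)
  = \frac{1}{2}\int_\Omega |\nabla u|^2 \,dx
  - \int_\Omega f\,u \,dx
  + \int_{\partial\Omega} \lambda\,(u-g)\,ds
  + \frac{\beta}{2}\int_{\partial\Omega} (u-g)^2 \,ds .
\end{equation*}
% In the deep learning setting, u and \lambda would each be parametrized by a
% neural network, and the saddle point sought by stochastic optimization.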