Residual minimization is a widely used technique for solving partial differential equations in variational form. It minimizes the dual norm of the residual, which naturally yields a saddle-point (min-max) problem over the so-called trial and test spaces. In the context of neural networks, we can address this min-max problem by employing one network to approximate the trial minimizer while another network seeks the test maximizers. However, the resulting method becomes numerically unstable as we approach the trial solution. To overcome this, we reformulate the residual minimization as an equivalent minimization of a Ritz functional fed by optimal test functions, which are themselves computed via another Ritz functional minimization. We call the resulting scheme the Deep Double Ritz Method (D$^2$RM), which combines two neural networks for approximating trial functions and optimal test functions within a nested double Ritz minimization strategy. Numerical results on several 1D diffusion and convection problems support the robustness of our method, up to the approximation properties of the networks and the training capacity of the optimizers.
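The nested structure described above can be illustrated on a discrete toy problem. The following sketch (our own illustration, not the paper's implementation) replaces the neural networks with plain parameter vectors and gradient descent: for a symmetric positive definite operator $A$ and load $b$, the inner loop minimizes a test-side Ritz functional $J_{\mathrm{test}}(v) = \tfrac{1}{2} v^\top G v - v^\top r(u)$, whose minimizer $v(u) = G^{-1} r(u)$ is the optimal test function for the residual $r(u) = b - Au$, while the outer loop descends the trial-side Ritz functional $J_{\mathrm{trial}}(u) = \tfrac{1}{2} v(u)^\top G v(u)$, whose gradient with respect to $u$ is $-A^\top v(u)$. All sizes, step rules, and iteration counts are illustrative choices:

```python
import numpy as np

# Toy discrete analogue of the nested double Ritz minimization:
# minimize the dual norm of the residual r(u) = b - A u, measured
# through the Gram matrix G of the test inner product.
rng = np.random.default_rng(0)
n = 8
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)      # SPD "stiffness" operator (illustrative)
G = np.eye(n)                    # Gram matrix of the test inner product
b = rng.standard_normal(n)
u_exact = np.linalg.solve(A, b)  # reference solution for checking

step = 1.0 / np.linalg.norm(A, 2) ** 2  # safe outer step for this A

u = np.zeros(n)
for outer in range(500):
    r = b - A @ u
    # Inner Ritz minimization: v descends J_test(v) = 1/2 v'Gv - v'r,
    # whose minimizer is the optimal test function v = G^{-1} r.
    v = np.zeros(n)
    for inner in range(50):
        v -= 0.5 * (G @ v - r)
    # Outer Ritz descent: grad of J_trial(u) = 1/2 v(u)'Gv(u) is -A'v(u).
    u -= step * (-(A.T @ v))

final_error = np.linalg.norm(u - u_exact)
```

With $G = I$ the scheme reduces to gradient descent on the Euclidean residual norm, which makes the toy easy to verify; a nontrivial Gram matrix would measure the residual in a genuinely dual norm, as the method intends.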