Residual minimization is a widely used technique for solving partial differential equations in variational form. It minimizes the dual norm of the residual, which naturally yields a saddle-point (min-max) problem over the so-called trial and test spaces. In the context of neural networks, we can address this min-max problem by employing one network to approximate the trial minimizer while another network seeks the test maximizer. However, the resulting method becomes numerically unstable as we approach the trial solution. To overcome this, we reformulate the residual minimization as an equivalent minimization of a Ritz functional fed with optimal test functions, which are themselves computed by minimizing another Ritz functional. We call the resulting scheme the Deep Double Ritz Method (D$^2$RM), which combines two neural networks for approximating the trial and optimal test functions within a nested double Ritz minimization strategy. Numerical results on different diffusion and convection problems support the robustness of our method, up to the approximation properties of the networks and the training capacity of the optimizers.
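As a schematic sketch of the two formulations described above (the notation here is ours, not fixed by the abstract: $\mathbb{U}$ and $\mathbb{V}$ denote the trial and test spaces, $b(\cdot,\cdot)$ the bilinear form, and $\ell$ the right-hand-side functional), residual minimization in the dual test norm reads

```latex
u^{\star} \;=\; \arg\min_{u \in \mathbb{U}} \;
\sup_{0 \neq v \in \mathbb{V}} \;
\frac{\lvert\, b(u,v) - \ell(v) \,\rvert}{\lVert v \rVert_{\mathbb{V}}},
```

which exposes the unstable min-max (saddle-point) structure. The reformulation replaces the inner supremum with a Ritz-type minimization that delivers an optimal test function $v_u$ for each trial candidate $u$, so the outer problem becomes a plain minimization of a Ritz functional evaluated at the pair $(u, v_u)$, i.e., a nested min-min rather than a min-max.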