Residual minimization is a widely used technique for solving Partial Differential Equations in variational form. It minimizes the dual norm of the residual, which naturally yields a saddle-point (min-max) problem over the so-called trial and test spaces. Such a min-max problem is highly nonlinear, and traditional methods often employ different mixed formulations to approximate it. Alternatively, it is possible to address the above saddle-point problem by employing Adversarial Neural Networks: one network approximates the global trial minimum, while another network seeks the test maximizer. However, this approach is numerically unstable due to a lack of continuity of the test maximizers with respect to the trial functions as we approach the exact solution. To overcome this, we reformulate the residual minimization as an equivalent minimization of a Ritz functional fed by optimal test functions computed from another Ritz functional minimization. The resulting Deep Double Ritz Method combines two Neural Networks for approximating the trial and optimal test functions. Numerical results on several 1D diffusion and convection problems support the robustness of our method up to the approximability and trainability capacity of the networks and the optimizer.
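The nested min-min structure described above can be illustrated on a toy problem. The following is a minimal sketch, not the paper's implementation: for $-u'' = f$ on $(0,1)$ with $u(0)=u(1)=0$ and $f=\pi^2\sin(\pi x)$ (exact solution $u=\sin(\pi x)$), the trial and test functions are linear combinations of sine modes rather than neural networks, and alternating gradient descent plays the role of the two training loops. All names and parameter choices below are illustrative assumptions.

```python
import numpy as np

# Toy "double Ritz" loop: inner minimization drives the test function
# toward the Riesz representative of the residual; the outer minimization
# then reduces the residual's dual norm through that optimal test function.

N = 4                                   # number of sine modes
x = np.linspace(0.0, 1.0, 2001)
w = np.full_like(x, x[1] - x[0])        # trapezoid quadrature weights
w[0] *= 0.5
w[-1] *= 0.5

k = np.arange(1, N + 1)
S = np.sin(np.outer(x, k * np.pi))                  # trial/test basis values
dS = (k * np.pi) * np.cos(np.outer(x, k * np.pi))   # their derivatives
f = np.pi**2 * np.sin(np.pi * x)

G = dS.T @ (dS * w[:, None])            # H^1_0 Gram matrix (test inner product)
l = S.T @ (f * w)                       # load vector <f, phi_k>

a = np.zeros(N)                         # trial coefficients ("trial network")
b = np.zeros(N)                         # test coefficients ("test network")
lr = 0.01

for _ in range(3000):
    r = G @ a - l                       # residual acting on the test basis
    # Inner Ritz step: minimize  1/2 ||phi||^2_V - r(phi)  over phi,
    # whose minimizer is the Riesz representative of the residual.
    b -= lr * (G @ b - r)
    # Outer step: decrease the dual norm of the residual using the
    # current test function (gradient of r_a(phi) with respect to a).
    a -= lr * (G @ b)

u = S @ a                               # recovered trial solution, ~ sin(pi x)
```

Here the alternating updates stand in for the two stochastic-gradient training loops of the actual method; with neural networks, the Gram products would be replaced by quadrature over network evaluations and automatic differentiation.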