There is tremendous potential in using neural networks to optimize numerical methods. In this paper, we introduce and analyse a framework for the neural optimization of discrete weak formulations, suitable for finite element methods. The main idea of the framework is to include a neural-network function acting as a control variable in the weak form. Finding the neural control that (quasi-)minimizes a suitable cost (or loss) functional then yields a numerical approximation with desirable attributes. In particular, the framework naturally allows the incorporation of known data of the exact solution, or of stabilization mechanisms (e.g., to remove spurious oscillations). The main result of our analysis pertains to the well-posedness and convergence of the associated constrained-optimization problem. In particular, we prove, under certain conditions, that the discrete weak forms are stable and that quasi-minimizing neural controls exist, which converge quasi-optimally. We specialize the analysis results to Galerkin, least-squares and minimal-residual formulations, where the neural-network dependence appears in the form of suitable weights. Elementary numerical experiments support our findings and demonstrate the potential of the framework.
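As a schematic illustration of the framework described above, the constrained-optimization problem can be sketched as follows; note that the notation here is generic and hedged, not the paper's exact formulation:

```latex
% Schematic sketch (generic placeholder notation):
% find a neural control \theta and a discrete solution u_h(\theta) such that
\min_{\theta \in \Theta}\; J\bigl(u_h(\theta)\bigr)
\quad \text{subject to} \quad
b_\theta\bigl(u_h(\theta), v_h\bigr) = \ell(v_h)
\qquad \forall\, v_h \in V_h ,
```

where, under these assumptions, $b_\theta$ is a $\theta$-dependent discrete (bi)linear form in which the neural-network dependence may enter as suitable weights, $\ell$ is the load functional, $V_h$ a finite element test space, and $J$ a cost (loss) functional that may incorporate known data of the exact solution or penalize spurious oscillations.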