It is well known that Newton's method, especially when applied to large problems such as the discretization of nonlinear partial differential equations (PDEs), can have trouble converging if the initial guess is too far from the solution. This work focuses on accelerating this convergence in the context of the discretization of nonlinear elliptic PDEs. We first provide a quick review of existing methods and justify our choice of learning an initial guess with a Fourier neural operator (FNO). This choice is motivated by the mesh-independence of such operators, whose training and evaluation can be performed on grids with different resolutions. The FNO is trained by minimizing, over generated data, loss functions based on the PDE discretization. Numerical results, in one and two dimensions, show that the proposed initial guess accelerates the convergence of Newton's method by a large margin compared to a naive initial guess, especially for highly nonlinear or anisotropic problems.
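The sketch below is a minimal illustration (not the authors' code) of the setup described above: Newton's method applied to a finite-difference discretization of a 1D nonlinear elliptic problem, with the initial guess supplied by an arbitrary callable. In the paper that callable would be a trained FNO mapping problem data to an approximate solution; here a crude hypothetical surrogate (the solution of the linearized problem) stands in for it purely to show the interface.

```python
import numpy as np

def newton_solve(F, J, u0, tol=1e-10, max_iter=50):
    """Plain Newton iteration: solve F(u) = 0 starting from u0."""
    u = u0.copy()
    for k in range(max_iter):
        r = F(u)
        if np.linalg.norm(r) < tol:
            return u, k
        du = np.linalg.solve(J(u), -r)
        u += du
    return u, max_iter

# Model problem: -u'' + u^3 = f on (0, 1), u(0) = u(1) = 0,
# discretized with second-order central differences on n interior points.
n = 99
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
f = 100.0 * np.sin(np.pi * x)            # example right-hand side

# Discrete Laplacian (tridiagonal)
A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2

F = lambda u: A @ u + u**3 - f           # nonlinear residual
J = lambda u: A + np.diag(3.0 * u**2)    # Jacobian of the residual

# Naive initial guess: zero vector.
u_naive, iters_naive = newton_solve(F, J, np.zeros(n))

# Learned initial guess: in the paper, u0 = FNO(f); here we mimic it with
# the solution of the linearized problem -u'' = f, only to show how a
# better starting point plugs into the same Newton loop.
u0_learned = np.linalg.solve(A, f)
u_learned, iters_learned = newton_solve(F, J, u0_learned)

print(f"Newton iterations from zero guess:    {iters_naive}")
print(f"Newton iterations from learned guess: {iters_learned}")
```

A better initial guess typically reduces the number of Newton iterations, and for strongly nonlinear problems can be the difference between convergence and divergence; the FNO-based guess proposed in the paper plays exactly this role while remaining usable across grid resolutions.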