In this paper, we introduce a quasi-Newton method for efficiently solving quasi-linear elliptic equations and systems, with a specific focus on GPU-based computation. By approximating the Jacobian matrix with a combination of the linear Laplacian and simplified nonlinear terms, the method reduces the computational overhead of traditional Newton methods while handling the large, sparse matrices that arise from discretized PDEs. We also provide a convergence analysis establishing local convergence to the exact solution under an optimal choice of the regularization parameter, ensuring stability and efficiency at each iteration. Numerical experiments in two- and three-dimensional domains validate the method's robustness and the computational gains of a tensor-product implementation. This approach offers a promising pathway for accelerating solvers for quasi-linear elliptic equations and systems, expanding the feasibility of complex simulations in physics, engineering, and other fields that leverage advanced hardware.
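To make the idea concrete, consider a model quasi-linear problem $-\nabla\cdot(a(u)\nabla u) = f$ with residual $F(u) = -\nabla\cdot(a(u)\nabla u) - f$; the model equation and the symbols below are illustrative assumptions, not the paper's exact construction. An exact Newton step solves $F'(u_k)\,\delta_k = -F(u_k)$, where the full Jacobian
\[
F'(u_k)\,v = -\nabla\cdot\bigl(a(u_k)\nabla v\bigr) - \nabla\cdot\bigl(a'(u_k)\,v\,\nabla u_k\bigr)
\]
must be reassembled and refactored at every iterate. A quasi-Newton iteration of the kind the abstract describes instead replaces $F'(u_k)$ by a fixed-structure approximation built from the linear Laplacian and a simplified, regularized nonlinear term, schematically
\[
\tilde J_\varepsilon = -\Delta + \varepsilon\, N(u_k), \qquad u_{k+1} = u_k - \tilde J_\varepsilon^{-1} F(u_k),
\]
where $\varepsilon$ plays the role of the regularization parameter and $N(u_k)$ denotes a simplified (e.g., diagonal) nonlinear contribution. Because the dominant operator $-\Delta$ is fixed across iterations, its discretization can be factored once, for instance via the tensor-product structure the abstract mentions, which is what keeps each iteration cheap on GPUs.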