The random feature method (RFM), a mesh-free machine learning-based framework, has emerged as a promising alternative for solving PDEs on complex domains. However, for large three-dimensional nonlinear problems, attaining high accuracy typically requires domain partitioning with many collocation points and random features per subdomain, which leads to extremely large and ill-conditioned nonlinear least-squares systems. To overcome these challenges, we propose two randomized Newton-type solvers. The first is an inexact Newton method with right preconditioning (IPN), in which randomized Jacobian compression and QR factorization are used to construct an efficient preconditioner that substantially reduces the condition number. The linearized subproblem at each Newton step is then solved approximately by LSQR, and a derivative-free line search is incorporated to ensure residual reduction and stable convergence. Building upon this framework, we further develop an adaptive multi-step inexact preconditioned Newton method (AMIPN). In this approach, the preconditioned Jacobian is reused across multiple inner iterations, while a prescribed maximum number of inner iterations together with an adaptive early-stopping criterion determines whether the current preconditioner can be retained in subsequent outer iterations. These mechanisms avoid redundant computations and enhance robustness. Extensive numerical experiments on both three-dimensional steady-state and two-dimensional time-dependent PDEs with complex geometries demonstrate the effectiveness of the proposed solvers. Compared with classical discretization techniques and recent machine-learning-based approaches, the methods consistently deliver substantial accuracy improvements and robust convergence, thereby establishing the RFM combined with IPN/AMIPN as an efficient framework for large-scale nonlinear PDEs.
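To make the IPN step concrete, the following is a minimal sketch of one right-preconditioned inexact Gauss-Newton step for a nonlinear least-squares residual F(x): the Jacobian is compressed with a Gaussian sketch, a QR factorization of the sketched Jacobian supplies the right preconditioner, LSQR solves the preconditioned linearized subproblem inexactly, and a derivative-free backtracking line search enforces residual reduction. This is an illustrative sketch under assumed names (F, jac, sketch_rows, lsqr_tol) and tolerances, not the paper's implementation.

```python
import numpy as np
from scipy.linalg import solve_triangular
from scipy.sparse.linalg import lsqr, LinearOperator


def ipn_step(F, jac, x, sketch_rows, lsqr_tol=1e-8, rng=None):
    """One right-preconditioned inexact Gauss-Newton step for min ||F(x)||^2 (illustrative)."""
    rng = np.random.default_rng(rng)
    r = F(x)                      # residual at the current iterate, shape (m,)
    J = jac(x)                    # Jacobian, shape (m, n); sketch_rows should exceed n
    m, n = J.shape

    # Randomized Jacobian compression + QR: the triangular factor of the
    # sketched Jacobian serves as a right preconditioner.
    S = rng.standard_normal((sketch_rows, m)) / np.sqrt(sketch_rows)
    _, R = np.linalg.qr(S @ J)    # R is n x n, upper triangular

    # LSQR works with the better-conditioned operator J R^{-1}.
    A = LinearOperator(
        (m, n),
        matvec=lambda y: J @ solve_triangular(R, y),
        rmatvec=lambda z: solve_triangular(R, J.T @ z, trans='T'),
    )
    y = lsqr(A, -r, atol=lsqr_tol, btol=lsqr_tol)[0]
    dx = solve_triangular(R, y)   # undo the right preconditioner

    # Derivative-free backtracking line search on the residual norm.
    step, r_norm = 1.0, np.linalg.norm(r)
    while np.linalg.norm(F(x + step * dx)) >= r_norm and step > 1e-4:
        step *= 0.5
    return x + step * dx
```

In this sketch the sketch-and-QR preconditioner is rebuilt at every step; the AMIPN variant described above would instead reuse the factor R across several inner iterations and retain it in later outer iterations when an adaptive early-stopping test permits.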