In this paper, we propose a data-driven framework for constructing efficient approximate inverse preconditioners for elliptic partial differential equations (PDEs) by learning the Green's function of the underlying operator with neural networks (NNs). The training process integrates four key components: an adaptive multiscale neural architecture ($\alpha$MSNN) that captures hierarchical features across near-, middle-, and far-field regimes; the use of coarse-grid anchor data to ensure physical identifiability; a multi-$\varepsilon$ staged training protocol that progressively refines the Green's function representation across spatial scales; and an overlapping domain decomposition that enables local adaptation while maintaining global consistency. Once trained, the NN-approximated Green's function is compressed directly into either a hierarchical ($\mathcal{H}$-) matrix or a sparse matrix, using only the mesh geometry and the network output. This geometric construction achieves nearly linear complexity in both setup and application while preserving the spectral properties essential for effective preconditioning. Numerical experiments on challenging elliptic PDEs demonstrate that the resulting preconditioners consistently yield fast convergence with small iteration counts.
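To make the geometric construction concrete, the following is a minimal illustrative sketch, not the paper's actual pipeline: a known closed-form Green's function (standing in for the trained NN output) is sampled at near-field node pairs of a 1D mesh to assemble a sparse approximate inverse, which is then supplied as a preconditioner to a Krylov solver. The kernel `green`, the truncation radius `r`, and the quadrature weight `h` are all assumptions chosen for this toy 1D Laplacian example.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Hypothetical stand-in for the trained NN: the exact Green's function of
# -u'' on (0,1) with homogeneous Dirichlet BCs, G(x,y) = min(x,y)*(1-max(x,y)).
def green(x, y):
    return np.minimum(x, y) * (1.0 - np.maximum(x, y))

n = 64
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)  # interior mesh nodes

# Discrete operator: standard second-order finite-difference Laplacian.
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)) / h**2

# Sparse approximate inverse: keep only near-field entries |x_i - x_j| <= r,
# weighted by the quadrature factor h, since h*G(x_i, x_j) approximates A^{-1}.
r = 0.1  # truncation radius (assumption for this toy example)
rows, cols, vals = [], [], []
for i in range(n):
    for j in range(n):
        if abs(x[i] - x[j]) <= r:
            rows.append(i)
            cols.append(j)
            vals.append(h * green(x[i], x[j]))
M = sp.csr_matrix((np.array(vals), (rows, cols)), shape=(n, n))

# Use M (an approximation of A^{-1}) directly as a preconditioner in GMRES.
b = np.ones(n)
u, info = spla.gmres(A, b, M=M)
```

In the paper's setting the kernel evaluations would come from the trained network, and the sparsity pattern (or $\mathcal{H}$-matrix block structure) would be derived from the mesh geometry rather than a fixed radius; the point of the sketch is only that assembly requires nothing beyond node coordinates and kernel values.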