This work introduces a paradigm for constructing parametric neural operators derived from finite-dimensional representations of Green's operators, with learnable Green's functions, for linear partial differential equations (PDEs). We refer to such neural operators as Neural Green's Operators (NGOs). Our construction of NGOs preserves the linear action of Green's operators on the inhomogeneity fields, while approximating the nonlinear dependence of the Green's function on the coefficients of the PDE using neural networks that take weighted averages of these coefficients as input. This construction reduces the complexity of the problem from learning the entire solution operator and its dependence on all parameters to learning only the Green's function and its dependence on the PDE coefficients. Moreover, taking weighted averages, rather than point samples, of input functions decouples the network size from the number of sampling points, enabling efficient resolution of multiple scales in the input fields. Furthermore, we show that our explicit representation of Green's functions enables the embedding of desirable mathematical attributes in our NGO architectures, such as symmetry, spectral, and conservation properties. Through numerical benchmarks on canonical PDEs, we demonstrate that NGOs achieve accuracy comparable or superior to that of deep operator networks, variationally mimetic operator networks, and Fourier neural operators with similar parameter counts, while generalizing significantly better when tested on out-of-distribution data. For time-dependent PDEs, we show that NGOs can produce pointwise-accurate dynamics in an auto-regressive manner when trained on a single time step. Finally, we show that we can leverage the explicit representation of Green's functions returned by NGOs to construct effective matrix preconditioners that accelerate iterative solvers for PDEs.
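To make the construction concrete, the following is a minimal, illustrative sketch in PyTorch of the core idea, not the paper's implementation; all names (NeuralGreensOperator, n_avg, phi, and so on) are hypothetical. A network parameterizes the Green's function G(x, y; c̄) from a point pair and a fixed number of weighted averages c̄ of the coefficient field, and the resulting Green's operator acts linearly on the forcing f through a quadrature sum, u(x) ≈ Σ_j w_j G(x, y_j; c̄) f(y_j).

```python
# Illustrative NGO sketch (hypothetical names, 1D domain on n quadrature points).
import torch
import torch.nn as nn

class NeuralGreensOperator(nn.Module):
    def __init__(self, n_avg: int, hidden: int = 64):
        super().__init__()
        # Input: a point pair (x, y) plus n_avg weighted averages of the
        # PDE coefficient field; output: a scalar Green's function value.
        self.net = nn.Sequential(
            nn.Linear(2 + n_avg, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def greens_function(self, x, y, c_avg):
        # x: (m,), y: (n,), c_avg: (n_avg,)  ->  G: (m, n)
        X, Y = torch.meshgrid(x, y, indexing="ij")
        C = c_avg.expand(*X.shape, -1)
        inp = torch.cat([X.unsqueeze(-1), Y.unsqueeze(-1), C], dim=-1)
        return self.net(inp).squeeze(-1)

    def forward(self, x, y, w, f, coeff, phi):
        # Weighted averages of the coefficient field (rather than point
        # samples) decouple the network size from the sampling resolution:
        # c_avg_k = sum_j w_j * phi_k(y_j) * coeff(y_j), with phi: (n, n_avg).
        c_avg = (w * coeff) @ phi
        G = self.greens_function(x, y, c_avg)
        # Linear action of the Green's operator on the forcing (quadrature).
        return G @ (w * f)

if __name__ == "__main__":
    n, m, n_avg = 128, 64, 8
    y = torch.linspace(0.0, 1.0, n)
    w = torch.full((n,), 1.0 / n)  # placeholder quadrature weights
    x = torch.linspace(0.0, 1.0, m)
    # Hypothetical averaging (test) functions: a small sine basis.
    phi = torch.stack([torch.sin((k + 1) * torch.pi * y) for k in range(n_avg)], dim=-1)
    model = NeuralGreensOperator(n_avg)
    u = model(x, y, w, torch.sin(torch.pi * y), torch.ones(n), phi)  # u: (m,)
```

Because the dependence on f is linear by construction, superposition holds exactly, and structural attributes of the kind the abstract mentions, such as the symmetry G(x, y) = G(y, x), could be imposed by symmetrizing the network output.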