Minimizing PDE-residual losses is a common strategy to promote physical consistency in neural operators. However, standard formulations often lack variational correctness, meaning that small residuals do not guarantee small solution errors due to the use of non-compliant norms or ad hoc penalty terms for boundary conditions. This work develops a variationally correct operator learning framework by constructing first-order system least-squares (FOSLS) objectives whose values are provably equivalent to the solution error in PDE-induced norms. We demonstrate this framework on stationary diffusion and linear elasticity, incorporating mixed Dirichlet-Neumann boundary conditions via variational lifts to preserve norm equivalence without inconsistent penalties. To ensure the function space conformity required by the FOSLS loss, we propose a Reduced Basis Neural Operator (RBNO). The RBNO predicts coefficients for a pre-computed, conforming reduced basis, thereby ensuring variational stability by design while enabling efficient training. We provide a rigorous convergence analysis that bounds the total error by the sum of finite element discretization bias, reduced basis truncation error, neural network approximation error, and statistical estimation errors arising from finite sampling and optimization. Numerical benchmarks validate these theoretical bounds and demonstrate that the proposed approach achieves superior accuracy in PDE-compliant norms compared to standard baselines, while the residual loss serves as a reliable, computable a posteriori error estimator.
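To make the coefficient-prediction idea concrete, the following minimal sketch shows one plausible PyTorch realization of an RBNO-style model together with a discrete residual-type loss. The class and function names (`ReducedBasisNeuralOperator`, `fosls_loss`), the network architecture, and the dense-matrix representation of the first-order system are illustrative assumptions for this sketch, not the implementation used in the paper.

```python
import torch
import torch.nn as nn


class ReducedBasisNeuralOperator(nn.Module):
    """Sketch of an RBNO: maps PDE parameters to reduced-basis coefficients;
    the output field is a linear combination of precomputed conforming basis
    vectors, so it stays in the conforming space by construction."""

    def __init__(self, param_dim: int, basis: torch.Tensor, hidden: int = 128):
        super().__init__()
        # basis: (n_dofs, n_modes) matrix of conforming reduced-basis vectors,
        # e.g. obtained from a POD of finite element snapshot solutions (assumed here).
        self.register_buffer("basis", basis)
        n_modes = basis.shape[1]
        self.net = nn.Sequential(
            nn.Linear(param_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, n_modes),
        )

    def forward(self, mu: torch.Tensor) -> torch.Tensor:
        coeffs = self.net(mu)            # (batch, n_modes) predicted coefficients
        return coeffs @ self.basis.T     # (batch, n_dofs) conforming field


def fosls_loss(model: nn.Module, mu: torch.Tensor,
               A: torch.Tensor, f: torch.Tensor) -> torch.Tensor:
    """Hypothetical discrete residual loss ||A u - f||^2, where A is a precomputed
    matrix representing the first-order system and f the corresponding data."""
    u = model(mu)                        # (batch, n_dofs)
    r = u @ A.T - f                      # residual of the discretized first-order system
    return (r ** 2).sum(dim=1).mean()
```

Because the basis is fixed and conforming, the trainable part reduces to a small coefficient network, which is what allows the residual loss to be evaluated in the discrete PDE-induced norm without leaving the admissible function space.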