Operator learning is a powerful paradigm for solving partial differential equations, with Fourier Neural Operators (FNOs) serving as a widely adopted foundation. However, FNOs face significant scalability challenges due to overparameterization and offer no native uncertainty quantification (UQ) -- a key requirement for reliable scientific and engineering applications. Instead, neural operators rely on post hoc UQ methods that ignore the geometric inductive biases of the underlying architecture. In this work, we introduce DINOZAUR: a diffusion-based neural operator parametrization with uncertainty quantification. Inspired by the structure of the heat kernel, DINOZAUR replaces the dense tensor multiplier in FNOs with a dimensionality-independent diffusion multiplier that has a single learnable time parameter per channel, drastically reducing the parameter count and memory footprint without compromising predictive performance. By defining priors over these time parameters, we cast DINOZAUR as a Bayesian neural operator that yields spatially correlated outputs and calibrated uncertainty estimates. Our method achieves competitive or superior performance across several PDE benchmarks while providing efficient uncertainty quantification.
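To make the heat-kernel intuition concrete: the heat kernel acts diagonally in Fourier space, scaling the mode with wavevector k by exp(-t * |k|^2). A minimal PyTorch sketch of such a diffusion multiplier is given below; it is an illustration under stated assumptions, not the authors' released implementation, and all names, shapes, and the softplus reparametrization are our own choices.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DiffusionMultiplier(nn.Module):
    """Heat-kernel-style spectral filter: each channel c is multiplied by
    exp(-t_c * |k|^2) in Fourier space, with a single learnable diffusion
    time t_c per channel (hypothetical sketch, not the paper's code)."""

    def __init__(self, channels: int):
        super().__init__()
        # Unconstrained parameter; softplus below keeps the diffusion
        # time positive, as required for a well-posed heat kernel.
        self.raw_t = nn.Parameter(torch.zeros(channels))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, *spatial). Any number of spatial dims works,
        # which is what makes the multiplier dimensionality-independent.
        spatial = tuple(range(2, x.ndim))
        x_hat = torch.fft.fftn(x, dim=spatial)

        # Squared angular frequency magnitude |k|^2 on the FFT grid,
        # built by broadcasting one frequency axis per spatial dimension.
        k2 = torch.zeros(x.shape[2:], device=x.device)
        for i, n in enumerate(x.shape[2:]):
            freqs = 2 * torch.pi * torch.fft.fftfreq(n, device=x.device)
            shape = [1] * len(spatial)
            shape[i] = n
            k2 = k2 + freqs.reshape(shape) ** 2

        t = F.softplus(self.raw_t)  # (channels,), strictly positive
        kernel = torch.exp(-t.view(1, -1, *([1] * len(spatial))) * k2)
        return torch.fft.ifftn(x_hat * kernel, dim=spatial).real


# Usage: the same module applies unchanged to 1D, 2D, or 3D fields.
layer = DiffusionMultiplier(channels=32)
u = torch.randn(8, 32, 64, 64)  # a batch of 2D fields
v = layer(u)                    # same shape, smoothed per channel
```

Note that this sketch covers only the deterministic multiplier; the Bayesian variant described in the abstract would additionally place a prior over the per-channel times t_c and sample them, rather than learning point estimates.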