Kolmogorov-Arnold Networks have emerged as interpretable alternatives to traditional multi-layer perceptrons. However, standard implementations lack the principled uncertainty quantification essential for many scientific applications. We present a framework integrating sparse variational Gaussian process inference with the Kolmogorov-Arnold topology, enabling scalable Bayesian inference with computational complexity quasi-linear in sample size. Through analytic moment matching, we propagate uncertainty through deep additive structures while maintaining interpretability. Three case studies demonstrate the framework's ability to distinguish aleatoric from epistemic uncertainty: calibration of heteroscedastic measurement noise in fluid flow reconstruction, quantification of prediction-confidence degradation in multi-step forecasting of advection-diffusion dynamics, and out-of-distribution detection in convolutional autoencoders. These results suggest that Sparse Variational Gaussian Process Kolmogorov-Arnold Networks (SVGP KANs) are a promising architecture for uncertainty-aware learning in scientific machine learning.
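The moment-matching idea sketched in the abstract can be illustrated with a toy sketch: each KAN edge is a univariate sparse GP, the expected RBF kernel under a Gaussian input gives an analytic predictive mean, and the additive node structure lets means and variances sum under an independence assumption. The class names (`SVGPEdge`, `kan_node_moments`), the fixed variational parameters, and the simplified epistemic-variance term are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def expected_rbf(m, s2, z, ell):
    # Analytic E_{x ~ N(m, s2)}[exp(-(x - z)^2 / (2 ell^2))]
    # (the standard closed form for an RBF kernel under a Gaussian input).
    denom = ell**2 + s2
    return ell / np.sqrt(denom) * np.exp(-(m - z)**2 / (2 * denom))

class SVGPEdge:
    """Toy univariate sparse-GP edge with fixed inducing points Z and
    fixed variational means mu (hypothetical parameters for illustration)."""
    def __init__(self, Z, mu, ell=1.0):
        self.Z = np.asarray(Z, float)
        self.ell = ell
        Kzz = np.exp(-(self.Z[:, None] - self.Z[None, :])**2 / (2 * ell**2))
        self.Kzz_reg = Kzz + 1e-8 * np.eye(len(self.Z))
        self.alpha = np.linalg.solve(self.Kzz_reg, np.asarray(mu, float))

    def moments(self, m, s2):
        # Predictive mean of f(x) under x ~ N(m, s2) via the expected kernel;
        # the variance uses a simplified Nystrom-style epistemic term, clipped
        # to stay positive (a crude stand-in for the full SVGP variance).
        psi1 = expected_rbf(m, s2, self.Z, self.ell)
        mean = psi1 @ self.alpha
        var = max(1.0 - psi1 @ np.linalg.solve(self.Kzz_reg, psi1), 1e-6)
        return mean, var

def kan_node_moments(edges, means, vars_):
    # Additive KAN node: edge outputs are treated as independent, so their
    # means and variances simply sum (the moment-matching step).
    out_m, out_v = 0.0, 0.0
    for e, m, s2 in zip(edges, means, vars_):
        mu, var = e.moments(m, s2)
        out_m += mu
        out_v += var
    return out_m, out_v
```

Stacking such nodes propagates a Gaussian approximation layer by layer: each layer consumes the previous layer's means and variances, which is what makes the deep additive structure tractable without sampling.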