Stochastic differential equations are commonly used to describe the evolution of stochastic processes. The state uncertainty of such processes is best represented by the probability density function (PDF), whose evolution is governed by the Fokker-Planck partial differential equation (FP-PDE). However, it is generally infeasible to solve the FP-PDE in closed form. In this work, we show that physics-informed neural networks (PINNs) can be trained to approximate the solution PDF. Our main contribution is the analysis of the PINN approximation error: we develop a theoretical framework to construct tight error bounds using PINNs. In addition, we derive a practical error bound that can be efficiently constructed with standard training methods. We further discuss how this error-bound framework generalizes to approximate solutions of other linear PDEs. Empirical results on nonlinear, high-dimensional, and chaotic systems validate the correctness of our error bounds while demonstrating the scalability of PINNs and their significant computational speedup in obtaining accurate PDF solutions compared to the Monte Carlo approach.
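As a minimal illustration of the FP-PDE the abstract refers to (a sketch, not code from the paper): for the 1-D Ornstein-Uhlenbeck process dX = -θX dt + σ dW, the stationary Fokker-Planck equation is 0 = θ ∂ₓ(x p) + (σ²/2) ∂²ₓ p, whose closed-form solution is a Gaussian with variance σ²/(2θ). The snippet below, under these assumed example parameters, checks that this known PDF drives the finite-difference FP residual to (near) zero on a grid.

```python
import numpy as np

# Assumed example system: Ornstein-Uhlenbeck drift f(x) = -theta*x,
# diffusion sigma; stationary variance is sigma^2 / (2*theta) = 1 here.
theta, sigma = 1.0, np.sqrt(2.0)

x = np.linspace(-4.0, 4.0, 801)
dx = x[1] - x[0]

# Known stationary solution: standard normal PDF for these parameters.
p = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

# Stationary FP residual: theta * d/dx (x p) + (sigma^2/2) * d^2 p / dx^2,
# evaluated with second-order central differences.
drift_term = np.gradient(theta * x * p, dx)
diffusion_term = 0.5 * sigma**2 * np.gradient(np.gradient(p, dx), dx)
residual = (drift_term + diffusion_term)[2:-2]  # drop boundary stencil artifacts

print(f"max |FP residual| = {np.abs(residual).max():.2e}")
```

A PINN for this problem would replace the closed-form `p` with a neural network and minimize exactly this residual (plus normalization/boundary terms) as its physics loss; the error bounds in the paper quantify how far such an approximate minimizer can be from the true PDF.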