We present a discretization-free, scalable framework for solving a large class of mass-conserving partial differential equations (PDEs), including the time-dependent Fokker-Planck equation and the Wasserstein gradient flow. The key observation is that the time-varying velocity field of the PDE solution must be self-consistent: it must satisfy a fixed-point equation involving the flow characterized by that same velocity field. By parameterizing the flow as a time-dependent neural network, we propose an end-to-end iterative optimization framework, called self-consistent velocity matching, for solving this class of PDEs. Compared to existing approaches, our method suffers from neither temporal nor spatial discretization, covers a wide range of PDEs, and scales to high dimensions. Experimentally, our method accurately recovers analytical solutions when they are available, and in high dimensions achieves comparable or better performance, with less training time, than recent large-scale JKO-based methods designed for a more restrictive family of PDEs.
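The self-consistency condition summarized above can be written out as a fixed-point equation. The notation below is illustrative (our own symbols for the density, flow, and velocity functional, not taken verbatim from the paper):

```latex
% A mass-conserving PDE in continuity-equation form, where the functional
% f maps the current density to a velocity field:
\[
  \partial_t \rho_t \;=\; -\nabla \cdot \bigl( \rho_t \, f(\rho_t, x, t) \bigr).
\]
% Let \Phi^v_t denote the flow map of a candidate velocity field v, and let
% \rho^v_t = (\Phi^v_t)_{\#}\,\rho_0 be the pushforward of the initial density
% along that flow. Self-consistency is the fixed-point requirement
\[
  v(x, t) \;=\; f\bigl( \rho^{v}_t, x, t \bigr) \qquad \text{for all } x, t,
\]
% i.e.\ the velocity that transports the density must equal the velocity the
% PDE prescribes for the density it produces. For example, the Fokker--Planck
% equation $\partial_t \rho = \nabla \cdot (\rho \nabla V) + \Delta \rho$
% corresponds to the choice
\[
  f(\rho, x) \;=\; -\nabla V(x) \;-\; \nabla \log \rho(x).
\]
```

Parameterizing $v$ (equivalently, the flow) as a time-dependent neural network turns the fixed-point residual into a matching objective that can be minimized iteratively.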