The method of choice for integrating the time-dependent Fokker-Planck equation in high dimension is to generate samples from the solution via integration of the associated stochastic differential equation. Here, we study an alternative scheme based on integrating an ordinary differential equation that describes the flow of probability. Acting as a transport map, this equation deterministically pushes samples from the initial density onto samples from the solution at any later time. Unlike integration of the stochastic dynamics, the method has the advantage of giving direct access to quantities that are challenging to estimate from trajectories alone, such as the probability current, the density itself, and its entropy. The probability flow equation depends on the gradient of the logarithm of the solution (its "score"), and so is a priori unknown. To resolve this dependence, we model the score with a deep neural network that is learned on-the-fly by propagating a set of samples according to the instantaneous probability current. We show theoretically that the proposed approach controls the KL divergence from the learned solution to the target, while learning on external samples from the stochastic differential equation does not control either direction of the KL divergence. Empirically, we consider several high-dimensional Fokker-Planck equations from the physics of interacting particle systems. We find that the method accurately matches analytical solutions when they are available, as well as moments computed via Monte Carlo when they are not. Moreover, the method offers compelling predictions for the global entropy production rate that outperform those obtained from learning on stochastic trajectories, and it can effectively capture non-equilibrium steady-state probability currents over long time intervals.
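To make the transport-map idea concrete, here is a minimal numpy sketch, not the paper's learned-score method, for a one-dimensional Ornstein-Uhlenbeck process, where the Gaussian solution of the Fokker-Planck equation and hence its score are known in closed form. Samples from the initial density are pushed forward deterministically by Euler integration of the probability flow ODE dx/dt = b(x) - D * score(x, t), and their empirical moments are compared with the analytic solution. All names (`D`, `m0`, `s0`, `moments`) are illustrative choices, not notation from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

D = 1.0              # diffusion coefficient
m0, s0 = 2.0, 0.5    # mean and std of the initial Gaussian density
n, T, dt = 100_000, 1.0, 1e-3

# Samples from the initial density N(m0, s0^2)
x = m0 + s0 * rng.standard_normal(n)

def moments(t):
    """Analytic mean and variance of the Gaussian solution at time t
    for drift b(x) = -x and diffusion coefficient D."""
    return m0 * np.exp(-t), D + (s0**2 - D) * np.exp(-2.0 * t)

# Probability flow ODE: dx/dt = b(x) - D * score(x, t),
# with the exact score of the Gaussian solution, -(x - m(t)) / var(t).
t = 0.0
while t < T - 1e-12:
    m, var = moments(t)
    score = -(x - m) / var
    x += dt * (-x - D * score)
    t += dt

mT, varT = moments(T)
print(abs(x.mean() - mT), abs(x.var() - varT))  # both small
```

In the paper's setting the score is not available analytically and is replaced by a neural network trained on-the-fly; this toy case only illustrates how the deterministic flow transports the initial samples onto samples from the solution at time T.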