We extend the Deep Galerkin Method (DGM) introduced in Sirignano and Spiliopoulos (2018) to solve a number of partial differential equations (PDEs) that arise in the context of optimal stochastic control and mean field games. First, we consider PDEs where the solution is constrained to be positive and integrate to unity, as is the case with Fokker-Planck equations. Our approach involves reparameterizing the solution as the exponential of a neural network, appropriately normalized to ensure both requirements are satisfied. This gives rise to a nonlinear partial integro-differential equation (PIDE), where the integral appearing in the equation is handled by a novel application of importance sampling. Second, we tackle a number of Hamilton-Jacobi-Bellman (HJB) equations that appear in stochastic optimal control problems. The key contribution is that these equations are approached in their unsimplified primal form, which includes an optimization problem as part of the equation. We extend the DGM algorithm to solve for the value function and the optimal control simultaneously by characterizing both as deep neural networks. Training the networks is performed by taking alternating stochastic gradient descent steps for the two functions, a technique inspired by policy improvement algorithms (PIA).
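As a minimal sketch of the reparameterization described above, with $f_\theta$ denoting the neural network with parameters $\theta$ and $\Omega$ the spatial domain (notation introduced here for illustration only), the candidate density is
\[
p_\theta(x) \;=\; \frac{e^{f_\theta(x)}}{\int_\Omega e^{f_\theta(y)}\,dy},
\qquad p_\theta(x) > 0, \qquad \int_\Omega p_\theta(x)\,dx = 1,
\]
so that positivity and unit mass hold by construction; substituting $p_\theta$ into the Fokker-Planck equation yields the nonlinear PIDE referred to above, whose normalizing integral can be estimated by importance sampling during training.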