High-dimensional parabolic partial differential equations (PDEs) often involve large-scale Hessian matrices, which are computationally expensive for deep learning methods that rely on automatic differentiation to compute derivatives. This work addresses that issue. In the proposed method, the PDE is recast in a martingale formulation, which makes the loss computation derivative-free and parallelizable over the time-space domain. The martingale formulation is then enforced with a Galerkin method via adversarial learning, which eliminates the need to compute conditional expectations in the martingale property. The method is further extended to Hamilton-Jacobi-Bellman (HJB) equations and the associated stochastic optimal control problems, enabling the simultaneous solution of the value function and the optimal feedback control in a derivative-free manner. Numerical results demonstrate the effectiveness and efficiency of the proposed method, which solves HJB equations accurately in dimensions up to 10,000.
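To illustrate the idea summarized above, a minimal sketch of a martingale-type formulation is given below; the notation ($\mathcal{L}$, $b$, $\sigma$, $f$, $g_\eta$, $u_\theta$) and the exact form of the min-max loss are assumptions for exposition and need not match the paper's precise construction. For a parabolic PDE $\partial_t u + \mathcal{L}u + f = 0$ with generator $\mathcal{L}u = b\cdot\nabla u + \tfrac12\operatorname{Tr}\!\big(\sigma\sigma^{\top}\nabla^2 u\big)$ and associated diffusion $\mathrm{d}X_t = b\,\mathrm{d}t + \sigma\,\mathrm{d}W_t$, Itô's formula implies that
\[
  M_t \;:=\; u(t, X_t) - u(0, X_0) + \int_0^t f(s, X_s)\,\mathrm{d}s
\]
is a (local) martingale whenever $u$ solves the PDE, since the drift terms cancel and only the stochastic integral $\int_0^t \nabla u\cdot\sigma\,\mathrm{d}W_s$ remains. The martingale property can be stated weakly as
\[
  \mathbb{E}\!\left[\big(M_{t'} - M_t\big)\, g(X_t)\right] = 0
  \qquad \text{for all } 0 \le t < t' \text{ and test functions } g,
\]
which suggests a derivative-free adversarial (Galerkin-type) objective of the form
\[
  \min_{u_\theta}\ \max_{g_\eta}\
  \Big(\mathbb{E}\!\left[\big(M^{\theta}_{t'} - M^{\theta}_{t}\big)\, g_\eta(X_t)\right]\Big)^{2},
\]
evaluated on simulated paths of $X$: only pointwise evaluations of $u_\theta$ enter the loss, so no Hessian of the network is needed.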