The combination of Monte Carlo methods and deep learning has recently led to efficient algorithms for solving partial differential equations (PDEs) in high dimensions. Related learning problems are often stated as variational formulations based on associated stochastic differential equations (SDEs), which allow the corresponding losses to be minimized with gradient-based optimization methods. In the resulting numerical implementations it is therefore crucial to rely on adequate gradient estimators that exhibit low variance in order to reach convergence accurately and swiftly. In this article, we rigorously investigate the corresponding numerical aspects that arise in the context of linear Kolmogorov PDEs. In particular, we systematically compare existing deep learning approaches and provide theoretical explanations for their performance. Subsequently, we suggest novel methods that can be shown to be more robust both theoretically and numerically, leading to substantial performance improvements.