This paper develops algorithms for high-dimensional stochastic control problems based on deep learning and dynamic programming (DP). Unlike the classical approximate DP approach, we first approximate the optimal policy by neural networks in the spirit of deep reinforcement learning, and then the value function by Monte Carlo regression. This is achieved in the DP recursion by performance or hybrid iteration, combined with regress-now, regress-later, or quantization methods from numerical probability. We provide a theoretical justification of these algorithms: consistency and rates of convergence for the control and value function estimates are analyzed and expressed in terms of the universal approximation error of the neural networks. Numerical results on various applications are presented in a companion paper [2] and illustrate the performance of our algorithms.
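To make the scheme concrete, the following is a minimal sketch of one backward DP pass in the hybrid-iteration spirit: at each time step a neural-network policy is trained by stochastic gradient descent to minimize the one-step cost plus the estimated continuation value, and the value function is then estimated by Monte Carlo ("regress-now") regression. This is an illustrative sketch, not the paper's implementation; the dynamics, cost, basis functions, and all names (`dynamics`, `running_cost`, `PolicyNet`, etc.) are assumptions chosen for a self-contained example.

```python
# Illustrative sketch of a policy-then-value backward DP recursion (assumed setup).
import torch
import torch.nn as nn

d, N, n_samples = 10, 20, 4096   # state dimension, horizon, Monte Carlo batch (assumed)

def dynamics(x, a, eps):         # assumed controlled dynamics X_{n+1} = F(X_n, a_n, eps_n)
    return x + a + 0.1 * eps

def running_cost(x, a):          # assumed running cost f(x, a)
    return (x ** 2).sum(dim=1) + (a ** 2).sum(dim=1)

class PolicyNet(nn.Module):      # feedforward policy approximator
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, d))
    def forward(self, x):
        return self.net(x)

value_next = lambda x: (x ** 2).sum(dim=1)   # terminal condition g(x) (assumed quadratic)
policies = [None] * N

for n in reversed(range(N)):     # backward dynamic-programming recursion
    # Policy step: train pi_n to minimize f(x, a) + E[ V_{n+1}(F(x, a, eps)) ].
    pi = PolicyNet()
    opt = torch.optim.Adam(pi.parameters(), lr=1e-3)
    for _ in range(200):
        x = torch.randn(n_samples, d)        # training states (assumed sampling law)
        eps = torch.randn(n_samples, d)      # noise samples
        a = pi(x)
        loss = (running_cost(x, a) + value_next(dynamics(x, a, eps))).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    policies[n] = pi

    # Value step ("regress-now"): least-squares regression of one-step-ahead
    # Monte Carlo targets on a fixed basis (here 1, x, x^2 componentwise).
    with torch.no_grad():
        x = torch.randn(n_samples, d)
        eps = torch.randn(n_samples, d)
        a = pi(x)
        y = running_cost(x, a) + value_next(dynamics(x, a, eps))
        phi = torch.cat([torch.ones(n_samples, 1), x, x ** 2], dim=1)
        beta = torch.linalg.lstsq(phi, y.unsqueeze(1)).solution
    # Bind the fitted coefficients into the new continuation-value estimate.
    value_next = (lambda b: lambda z: (torch.cat(
        [torch.ones(z.shape[0], 1), z, z ** 2], dim=1) @ b).squeeze(1))(beta)
```

In this sketch the value function is regressed on a fixed polynomial basis for simplicity; the paper's algorithms equally allow a neural-network value approximation (hybrid iteration) or quantization in place of the regression step.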