This article studies deep neural network expression rates for optimal stopping problems of discrete-time Markov processes on high-dimensional state spaces. A general framework is established in which the value function and continuation value of an optimal stopping problem can be approximated with error at most $\varepsilon$ by a deep ReLU neural network of size at most $\kappa d^{\mathfrak{q}} \varepsilon^{-\mathfrak{r}}$. The constants $\kappa,\mathfrak{q},\mathfrak{r} \geq 0$ do not depend on the dimension $d$ of the state space or the approximation accuracy $\varepsilon$. This proves that deep neural networks do not suffer from the curse of dimensionality when employed to solve optimal stopping problems. The framework covers, for example, exponential L\'evy models, discrete diffusion processes and their running minima and maxima. These results mathematically justify the use of deep neural networks for numerically solving optimal stopping problems and pricing American options in high dimensions.