Deep neural networks (DNNs) with ReLU activation function are proved to express viscosity solutions of linear partial integro-differential equations (PIDEs) on state spaces of possibly high dimension $d$. Admissible PIDEs comprise Kolmogorov equations for high-dimensional diffusion and advection, and for pure jump L\'{e}vy processes. We prove, for such PIDEs arising from a class of jump-diffusions on $\mathbb{R}^d$, that for any compact $K\subset \mathbb{R}^d$ there exist constants $C,{\mathfrak{p}},{\mathfrak{q}}>0$ such that for every $\varepsilon \in (0,1]$ and for every $d\in \mathbb{N}$ the normalized (over $K$) DNN $L^2$-expression error of viscosity solutions of the PIDE is of size $\varepsilon$ with DNN size bounded by $Cd^{\mathfrak{p}}\varepsilon^{-\mathfrak{q}}$. In particular, the constant $C>0$ is independent of $d\in \mathbb{N}$ and of $\varepsilon \in (0,1]$, and depends only on the coefficients in the PIDE and on the measure used to quantify the error. This establishes that ReLU DNNs can break the curse of dimensionality (CoD for short) for viscosity solutions of linear, possibly degenerate PIDEs corresponding to Markovian jump-diffusion processes. As a consequence of the employed techniques, we also obtain that expectations of a large class of path-dependent functionals of the underlying jump-diffusion processes can be expressed without the CoD.
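As a minimal sketch of the main expression-rate statement in the notation above (the symbols $u$ for the viscosity solution, $\Phi_\varepsilon$ for the realizing ReLU DNN, and $\mu$ for the normalizing measure on $K$ are illustrative labels, not notation fixed by the abstract), the bound asserted for every $d\in\mathbb{N}$ and $\varepsilon\in(0,1]$ reads:
\[
  \left( \int_{K} \big| u(x) - \Phi_\varepsilon(x) \big|^2 \, \mu(\mathrm{d}x) \right)^{1/2} \le \varepsilon,
  \qquad
  \mathrm{size}(\Phi_\varepsilon) \le C\, d^{\mathfrak{p}}\, \varepsilon^{-\mathfrak{q}},
\]
with $C,{\mathfrak{p}},{\mathfrak{q}}>0$ independent of $d$ and $\varepsilon$, so that the DNN size grows only polynomially in $d$ and in $\varepsilon^{-1}$.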