We consider ordinary differential equations (ODEs) that involve expectations of a random variable. These ODEs are special cases of McKean-Vlasov stochastic differential equations (SDEs). A plain vanilla Monte Carlo approximation method for such ODEs requires a computational cost of order $\varepsilon^{-3}$ to achieve a root-mean-square error of size $\varepsilon$. In this work we adapt recently introduced full history recursive multilevel Picard (MLP) algorithms to reduce this computational complexity. Our main result shows that, for every $\delta>0$, the proposed MLP approximation algorithm requires only a computational effort of order $\varepsilon^{-(2+\delta)}$ to achieve a root-mean-square error of size $\varepsilon$.
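To fix ideas, a prototypical example of the type of equation considered is an ODE whose right-hand side involves an expectation over an independent random variable $Z$; the specific form displayed below is an illustrative assumption rather than the exact setting of this work:
\[
  x'(t) = f\bigl(x(t), \mathbb{E}\bigl[\varphi(x(t), Z)\bigr]\bigr), \qquad x(0) = x_0, \qquad t \in [0,T].
\]
In a plain vanilla Monte Carlo scheme the expectation is replaced at every time step by an average over $M$ independent samples of $Z$; heuristically, a root-mean-square error of size $\varepsilon$ requires $M$ of order $\varepsilon^{-2}$ samples per step together with a time discretization of order $\varepsilon^{-1}$ steps, which is consistent with the overall cost of order $\varepsilon^{-3}$ stated above.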