In financial engineering, prices of financial products are computed approximately, many times each trading day, with (slightly) different parameters in each calculation. In many financial models, such prices can be approximated by means of Monte Carlo (MC) simulations. To obtain a good approximation, the MC sample size usually needs to be considerably large, resulting in a long computing time for a single approximation. In this paper, we introduce a new approximation strategy for parametric approximation problems, including the parametric financial pricing problems described above. A central aspect of the approximation strategy proposed in this article is to combine MC algorithms with machine learning techniques to, roughly speaking, learn the random variables (LRV) in MC simulations. In other words, we employ stochastic gradient descent (SGD) optimization methods not to train the parameters of standard artificial neural networks (ANNs) but to learn the random variables appearing in MC approximations. We numerically test the LRV strategy on various parametric problems with convincing results when compared with standard MC simulations, Quasi-Monte Carlo simulations, SGD-trained shallow ANNs, and SGD-trained deep ANNs. Our numerical simulations strongly indicate that the LRV strategy might be capable of overcoming the curse of dimensionality in the $L^\infty$-norm in several cases where the standard deep learning approach has been proven not to be able to do so. This does not contradict the lower bounds established in the scientific literature, because the LRV strategy lies outside the class of algorithms for which such lower bounds have been established. The proposed LRV strategy is of a general nature and not restricted to the parametric financial pricing problems described above, but is applicable to a large class of approximation problems.
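To make the idea more concrete, the following is one possible illustrative formalization of the LRV strategy; the notation is ours and the precise objective used in the paper may differ. Suppose that for every parameter $p$ the quantity of interest admits an MC representation $u(p) = \mathbb{E}[\Phi(p, Z)]$ with a random variable $Z$ and a measurable function $\Phi$, so that the standard MC approximation with sample size $M$ reads $u(p) \approx \frac{1}{M}\sum_{m=1}^{M}\Phi(p, Z_m)$ for i.i.d. copies $Z_1, \dots, Z_M$ of $Z$. Instead of drawing fresh samples in every computation, the LRV idea as described above treats realizations $\theta = (\theta_1, \dots, \theta_M)$ of these random variables as trainable parameters and selects them by SGD, for instance by minimizing the expected squared distance to a one-sample unbiased estimate,
$$ \min_{\theta_1, \dots, \theta_M} \; \mathbb{E}\!\left[\left(\frac{1}{M}\sum_{m=1}^{M}\Phi(P, \theta_m) - \Phi(P, Z)\right)^{2}\right], $$
where $P$ is a random parameter drawn from the region of interest and $Z$ is an independent copy of the underlying random variable. Once trained, evaluating $\frac{1}{M}\sum_{m=1}^{M}\Phi(p, \theta_m)$ for a new parameter $p$ is a single deterministic pass, i.e., as cheap as one MC run with sample size $M$.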