Optimal values and solutions of empirical approximations of stochastic optimization problems can be viewed as statistical estimators of their true counterparts. From this perspective, it is important to understand the asymptotic behavior of these estimators as the sample size goes to infinity, a question of both theoretical and practical interest. This area of study has a long tradition in stochastic programming. However, the literature lacks consistency analyses for problems in which the decision variables are taken from an infinite-dimensional space, as arise in optimal control, scientific machine learning, and statistical estimation. By exploiting the typical problem structures found in these applications, which give rise to hidden norm compactness properties for solution sets, we prove consistency results for nonconvex, risk-averse stochastic optimization problems formulated in infinite-dimensional spaces. The proofs rest on several crucial results from the theory of variational convergence. The theoretical results are demonstrated for several important problem classes arising in the literature.