We explore generalizations of integrated learning and optimization frameworks for data-driven contextual stochastic optimization that can adapt to heteroscedasticity. We identify conditions on the stochastic program, data generation process, and prediction setup under which these generalizations possess asymptotic and finite-sample guarantees for a class of stochastic programs, including two-stage stochastic mixed-integer programs with continuous recourse. We verify that our assumptions hold for popular parametric and nonparametric regression methods.