Incorporating side observations in decision making can reduce uncertainty and boost performance, but it also requires that we tackle a potentially complex predictive relationship. While one may use off-the-shelf machine learning methods to separately learn a predictive model and plug it in, a variety of recent methods instead integrate estimation and optimization by fitting the model to directly optimize downstream decision performance. Surprisingly, in the case of contextual linear optimization, we show that the naive plug-in approach actually achieves regret convergence rates that are significantly faster than methods that directly optimize downstream decision performance. We show this by leveraging the fact that specific problem instances do not have arbitrarily bad near-dual-degeneracy. While there are other pros and cons to consider as we discuss and illustrate numerically, our results highlight a nuanced landscape for the enterprise of integrating estimation and optimization. Our results are overall positive for practice: predictive models are easy and fast to train using existing tools, simple to interpret, and, as we show, lead to decisions that perform very well.
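To make the contrast concrete, here is a minimal sketch of the naive plug-in approach for contextual linear optimization. The setup is illustrative and not taken from the paper: the decision variable lies in a placeholder polytope, the cost vector depends linearly on the context, and we first fit a predictive model for the conditional mean cost, then plug the prediction into the linear program and solve.

```python
# Minimal sketch of the naive plug-in approach (illustrative assumptions):
# decisions z lie in a polytope {z : A z <= b, z >= 0}, the unknown cost
# vector c depends on a context x, and we (1) fit a model for E[c | x],
# (2) plug the prediction into the linear program and solve it.
import numpy as np
from sklearn.linear_model import LinearRegression
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Synthetic training data: contexts X and observed cost vectors C.
n, p, d = 500, 5, 3                         # samples, context dim, decision dim
X = rng.normal(size=(n, p))
W = rng.normal(size=(p, d))
C = X @ W + 0.1 * rng.normal(size=(n, d))   # noisy costs

# Step 1: estimate the predictive model E[c | x] (here, ordinary least squares).
model = LinearRegression().fit(X, C)

# Placeholder feasible region: z >= 0, sum(z) <= 1.
A_ub = np.ones((1, d))
b_ub = np.array([1.0])

def plug_in_decision(x):
    """Solve min_z c_hat(x)^T z over the polytope, with c_hat the plug-in prediction."""
    c_hat = model.predict(x.reshape(1, -1)).ravel()
    res = linprog(c=c_hat, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * d)
    return res.x

x_new = rng.normal(size=p)
print("plug-in decision:", plug_in_decision(x_new))
```

Integrated (end-to-end) methods would instead fit `model` by minimizing the downstream decision cost induced by its predictions rather than the prediction error itself; the sketch above only illustrates the two-stage plug-in baseline discussed in the abstract.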