Motivated by the poor performance of cross-validation in settings where data are scarce, we propose a novel estimator of the out-of-sample performance of a policy in data-driven optimization. Our approach exploits the optimization problem's sensitivity analysis to estimate the gradient of the optimal objective value with respect to the amount of noise in the data, and uses the estimated gradient to debias the policy's in-sample performance. Unlike cross-validation techniques, our approach avoids sacrificing data for a test set and utilizes all data when training; hence, it is well-suited to settings where data are scarce. We prove bounds on the bias and variance of our estimator for optimization problems with uncertain linear objectives but known, potentially non-convex, feasible regions. For more specialized optimization problems where the feasible region is "weakly-coupled" in a certain sense, we prove stronger results. Specifically, we provide explicit high-probability bounds on the error of our estimator that hold uniformly over a policy class and depend on the problem's dimension and the policy class's complexity. Our bounds show that, under mild conditions, the error of our estimator vanishes as the dimension of the optimization problem grows, even if the amount of available data remains small and constant. Said differently, we prove that our estimator performs well in the small-data, large-scale regime. Finally, we numerically compare our proposed method to state-of-the-art approaches through a case study on dispatching emergency medical response services using real data. Our method provides more accurate estimates of out-of-sample performance and learns better-performing policies.