Robust optimization has been established as a leading methodology for approaching decision problems under uncertainty. A central ingredient in deriving a robust optimization model is identifying a suitable model of the uncertainty, called the uncertainty set. An ongoing challenge in the recent literature is to derive uncertainty sets from given historical data that lead to solutions which remain robust with respect to future scenarios. In this paper we use an unsupervised deep learning method to learn and extract hidden structures from data, leading to non-convex uncertainty sets and better robust solutions. We prove that most classical uncertainty classes are special cases of our derived sets and that optimizing over them is strongly NP-hard. Nevertheless, we show that the trained neural networks can be integrated into a robust optimization model by formulating the adversarial problem as a convex quadratic mixed-integer program. This allows us to derive robust solutions through an iterative scenario generation process. In our computational experiments, we compare this approach to a similar approach based on kernel-based support vector clustering. We find that uncertainty sets derived by the unsupervised deep learning method describe the data more accurately and lead to robust solutions that outperform the comparison method with respect to both objective value and feasibility.