It is increasingly common to solve combinatorial optimisation problems that are partially specified. We survey the case where the objective function or the relations between variables are not known or are only partially specified. The challenge is to learn them from available data, while taking into account a set of hard constraints that a solution must satisfy, and the fact that solving the optimisation problem (especially during learning) is computationally very demanding. This paper overviews four seemingly unrelated approaches, each of which can be viewed as learning the objective function of a hard combinatorial optimisation problem: 1) surrogate-based optimisation, 2) empirical model learning, 3) decision-focused learning (`predict + optimise'), and 4) structured-output prediction. We formalise each learning paradigm, first in the way commonly found in the literature, and then bring the formalisations together in a compatible way using regret. We discuss the differences and interactions between these frameworks, highlight the opportunities for cross-fertilisation, and survey open directions.
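To make the unifying notion concrete, the following is a minimal sketch of regret in the decision-focused setting, with generic notation ($f$, $c$, $\hat{c}$, feasible set $F$) assumed here rather than fixed by the abstract: the regret of optimising a learned objective instead of the true one is the induced loss in true objective value.

\[
  x^{*}(c) \in \operatorname*{arg\,min}_{x \in F} f(x, c),
  \qquad
  \mathrm{regret}(\hat{c}, c)
    = f\bigl(x^{*}(\hat{c}), c\bigr) - f\bigl(x^{*}(c), c\bigr) \;\ge\; 0 .
\]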