Combinatorial optimisation problems are ubiquitous in artificial intelligence. Designing the underlying models, however, requires substantial expertise, which is a limiting factor in practice. The models typically consist of hard and soft constraints, or combine hard constraints with an objective function. We introduce a novel setting for learning combinatorial optimisation problems from contextual examples. These positive and negative examples show, in a particular context, whether the solutions are good enough or not. We develop our framework using the MAX-SAT formalism, as it is a simple yet powerful setting that has these features. We study the learnability of MAX-SAT models. Our theoretical results show that high-quality MAX-SAT models can be learned from contextual examples in both the realisable and agnostic settings, as long as the data satisfies an intuitive "representativeness" condition. We also contribute two implementations based on our theoretical results: one leverages ideas from syntax-guided synthesis, while the other makes use of stochastic local search techniques. The two implementations are evaluated by recovering synthetic and benchmark models from contextual examples. The experimental results support our theoretical analysis, showing that MAX-SAT models can be learned from contextual examples. Of the two implementations, the stochastic local search learner scales much better than the syntax-guided implementation while providing comparable or better models.