In Constraint Programming, constraints are usually represented as predicates allowing or forbidding combinations of values. However, some Constraint-Based Local Search algorithms exploit a finer representation: error functions. By associating with each constraint type a function evaluating the quality of an assignment, this representation extends the expressiveness of the regular Constraint Satisfaction Problem / Constrained Optimization Problem formalisms. However, this comes at a heavy price: it makes problem modeling significantly harder. Indeed, one must provide a set of error functions that are not always easy to define. Here, we propose a method to automatically learn an error function corresponding to a constraint, given a function deciding whether assignments are valid or not. This is, to the best of our knowledge, the first attempt to automatically learn error functions for hard constraints. Our method learns error functions in a supervised fashion, aiming to reproduce the Hamming distance, using a variant of neural networks we name Interpretable Compositional Networks, which yields interpretable results, unlike regular artificial neural networks. We run experiments on five different constraints to show its versatility. Experiments show that functions learned on small dimensions scale to high dimensions, outputting a perfect or near-perfect Hamming distance for most tested constraints.
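To make the supervised target concrete, below is a minimal sketch (not the paper's implementation) of how a Hamming-distance label could be brute-forced on small dimensions: the distance from an assignment to the nearest assignment satisfying the constraint's validity predicate. The helper names `hamming_target` and `all_different` are hypothetical and used only for illustration.

```python
from itertools import product

def hamming_target(assignment, domain, is_valid):
    """Minimal Hamming distance from `assignment` to any assignment over the
    same variables that satisfies `is_valid`. Brute force, so only feasible
    on small dimensions, where such targets could serve as training labels."""
    best = None
    for candidate in product(domain, repeat=len(assignment)):
        if is_valid(candidate):
            dist = sum(a != b for a, b in zip(assignment, candidate))
            best = dist if best is None else min(best, dist)
    return best  # None if the constraint has no solution over this domain

# Hypothetical usage with an AllDifferent-style validity check:
all_different = lambda xs: len(set(xs)) == len(xs)
print(hamming_target((1, 1, 3), domain=range(1, 4), is_valid=all_different))  # -> 1
```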