Neural Combinatorial Optimization approaches have recently leveraged the expressiveness and flexibility of deep neural networks to learn efficient heuristics for hard Combinatorial Optimization (CO) problems. However, most current methods lack generalization: for a given CO problem, heuristics trained on instances with certain characteristics underperform when tested on instances with different characteristics. While some previous works have focused on varying the properties of the training instances, we postulate that a one-size-fits-all model is out of reach. Instead, we formalize solving a CO problem over a given instance distribution as a separate learning task and investigate meta-learning techniques to learn a model on a variety of tasks, in order to optimize its capacity to adapt to new tasks. Through extensive experiments on two CO problems, using both synthetic and realistic instances, we show that our proposed meta-learning approach significantly improves the generalization of two state-of-the-art models.
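The abstract does not name a specific meta-learning algorithm, so as a purely illustrative sketch, the snippet below shows one common instantiation of "learn a shared initialization that adapts quickly to new tasks": a simplified first-order MAML-style loop in which each task stands in for a CO instance distribution. `PolicyNet`, `task_loss`, and `sample_task` are hypothetical placeholders, not the paper's actual models or code.

```python
# Illustrative first-order MAML-style meta-training sketch (hypothetical; the
# paper's actual meta-learning algorithm and architectures are not specified
# in the abstract). Each "task" stands in for a CO instance distribution
# with its own characteristics (e.g. size or structure).
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class PolicyNet(nn.Module):
    """Placeholder model; a real NCO heuristic would be an attention/GNN policy."""
    def __init__(self, dim=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, x):
        return self.net(x)

def task_loss(model, x, y):
    """Surrogate supervised loss; an NCO model would use e.g. a REINFORCE loss."""
    return F.mse_loss(model(x), y)

def sample_task(dim=16, n=64):
    """Draw one synthetic task: support/query batches from a task-specific mapping."""
    w = torch.randn(dim, 1)            # task-specific target mapping
    x = torch.randn(n, dim)
    y = x @ w
    return (x[: n // 2], y[: n // 2]), (x[n // 2:], y[n // 2:])

model, n_tasks = PolicyNet(), 4
meta_opt = torch.optim.Adam(model.parameters(), lr=1e-3)
inner_lr, inner_steps = 1e-2, 3

for meta_iter in range(1000):
    meta_opt.zero_grad()
    for _ in range(n_tasks):
        (xs, ys), (xq, yq) = sample_task()
        # Inner loop: adapt a copy of the shared initialization to this task.
        adapted = copy.deepcopy(model)
        inner_opt = torch.optim.SGD(adapted.parameters(), lr=inner_lr)
        for _ in range(inner_steps):
            inner_opt.zero_grad()
            task_loss(adapted, xs, ys).backward()
            inner_opt.step()
        # Outer step (first-order): gradient of the post-adaptation query loss,
        # accumulated onto the shared initialization.
        grads = torch.autograd.grad(task_loss(adapted, xq, yq),
                                    adapted.parameters())
        for p, g in zip(model.parameters(), grads):
            p.grad = g / n_tasks if p.grad is None else p.grad + g / n_tasks
    meta_opt.step()
```

Full (second-order) MAML would differentiate through the inner-loop updates; first-order variants such as the one sketched above trade that exactness for simplicity and memory, which is a common choice when the base model is large.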