Combinatorial optimization problems (COPs) over graphs are a fundamental challenge in optimization. Reinforcement learning (RL) has recently emerged as a new framework for tackling these problems and has demonstrated promising results. However, most RL solutions construct the solution incrementally in a greedy manner, which inevitably introduces unnecessary dependencies on action sequences and requires substantial problem-specific design. We propose a general RL framework that not only exhibits state-of-the-art empirical performance but also generalizes to a wide class of COPs. Specifically, we define the state as a complete solution to a problem instance and the action as a perturbation of this solution. We utilize graph neural networks (GNNs) to extract latent representations of problem instances for state-action encoding, and then apply deep Q-learning to obtain a policy that gradually refines the solution by flipping or swapping vertex labels. Experiments are conducted on Maximum $k$-Cut and the Traveling Salesman Problem, and performance improvements are achieved against a set of learning-based and heuristic baselines.
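To make the state/action definitions concrete, the following is a minimal sketch of the solution-perturbation view for Max $k$-Cut, assuming the instance is given as a weighted adjacency matrix. The helper names (`cut_value`, `flip_gain`, `refine`) are illustrative, and the hand-coded one-step gain stands in for the GNN-based Q-values learned in the paper; it is not the authors' implementation.

```python
# Minimal sketch: state = a complete vertex labeling, action = re-label (flip) one vertex.
# The hand-coded flip_gain below is a placeholder for the learned GNN Q-function.
import random


def cut_value(adj, labels):
    """Total weight of edges whose endpoints carry different labels."""
    n = len(adj)
    return sum(adj[i][j]
               for i in range(n) for j in range(i + 1, n)
               if labels[i] != labels[j])


def flip_gain(adj, labels, v, c):
    """Change in cut value if vertex v is re-labeled to c (stand-in for a Q-value)."""
    old = sum(adj[v][u] for u in range(len(adj)) if labels[u] != labels[v])
    new = sum(adj[v][u] for u in range(len(adj)) if labels[u] != c)
    return new - old


def refine(adj, k, steps=200, epsilon=0.1, seed=0):
    """Start from a random labeling and repeatedly apply a flip action,
    perturbing the whole solution rather than building it greedily."""
    rng = random.Random(seed)
    n = len(adj)
    labels = [rng.randrange(k) for _ in range(n)]   # state = full solution
    best, best_val = labels[:], cut_value(adj, labels)
    for _ in range(steps):
        # Candidate flip actions: assign vertex v a new label c.
        actions = [(v, c) for v in range(n) for c in range(k) if c != labels[v]]
        if rng.random() < epsilon:                  # epsilon-greedy exploration
            v, c = rng.choice(actions)
        else:                                       # greedy w.r.t. one-step gain
            v, c = max(actions, key=lambda a: flip_gain(adj, labels, *a))
        labels[v] = c                               # apply the perturbation
        val = cut_value(adj, labels)
        if val > best_val:                          # keep the best solution seen
            best, best_val = labels[:], val
    return best, best_val


if __name__ == "__main__":
    adj = [[0, 1, 2, 0],
           [1, 0, 1, 3],
           [2, 1, 0, 1],
           [0, 3, 1, 0]]
    print(refine(adj, k=2))
```

In the full method, the epsilon-greedy flip selection above is replaced by a deep Q-learning policy over GNN state-action embeddings, and swap actions are used analogously for problems such as the TSP.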