Recently, applying Reinforcement Learning (RL) methods to NP-hard combinatorial optimization problems has become a popular research topic. This interest stems largely from the nature of traditional combinatorial algorithms, which are often based on a trial-and-error process; RL aims to automate that process. In this regard, this paper focuses on the application of RL to the Vehicle Routing Problem (VRP), a well-known combinatorial problem belonging to the class of NP-hard problems. In this work, the problem is first modeled as a Markov Decision Process (MDP), and the Proximal Policy Optimization (PPO) method, which belongs to the Actor-Critic class of RL methods, is then applied. In a second phase, the neural architecture behind the Actor and the Critic is established: a convolutional neural network is adopted for both, a choice that makes it possible to address problem instances of different sizes effectively. Experiments performed on a wide range of instances show that the algorithm has good generalization capabilities and can reach good solutions in a short time. Comparisons between the proposed algorithm and the state-of-the-art solver OR-TOOLS show that the latter still outperforms the RL algorithm; however, several future research directions aim to improve the current performance of the proposed approach.
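To illustrate the size-independence property that motivates the convolutional Actor-Critic design, the sketch below shows a minimal PyTorch network in that spirit. It is an assumption-laden illustration, not the paper's actual architecture: all layer sizes, the three node features (coordinates and demand), and the 1x1-convolution encoder are hypothetical choices. Because the convolutions share weights across node positions, the same network processes VRP instances with different numbers of customers.

```python
import torch
import torch.nn as nn

class ConvActorCritic(nn.Module):
    """Illustrative Actor-Critic with a shared 1-D convolutional encoder.

    Node features (here assumed to be x, y, demand) are encoded with
    kernel-size-1 Conv1d layers, so the same weights apply to VRP
    instances of any size. Layer widths are illustrative assumptions.
    """

    def __init__(self, in_feats=3, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(in_feats, hidden, kernel_size=1),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=1),
            nn.ReLU(),
        )
        # Actor head: one logit per node (candidate next customer).
        self.actor_head = nn.Conv1d(hidden, 1, kernel_size=1)
        # Critic head: scalar state value from a mean-pooled embedding.
        self.critic_head = nn.Linear(hidden, 1)

    def forward(self, nodes):
        # nodes: (batch, in_feats, n_nodes)
        h = self.encoder(nodes)
        logits = self.actor_head(h).squeeze(1)               # (batch, n_nodes)
        value = self.critic_head(h.mean(dim=2)).squeeze(-1)  # (batch,)
        return logits, value

# Usage: the same network handles instances of different sizes unchanged.
net = ConvActorCritic()
small = torch.rand(2, 3, 10)   # two 10-customer instances
large = torch.rand(2, 3, 50)   # two 50-customer instances
logits_s, v_s = net(small)
logits_l, v_l = net(large)
print(logits_s.shape, v_s.shape)  # torch.Size([2, 10]) torch.Size([2])
print(logits_l.shape, v_l.shape)  # torch.Size([2, 50]) torch.Size([2])
```

In a full PPO loop, the actor's logits would be masked to exclude already-visited customers and fed to a categorical policy, while the critic's value estimate serves as the baseline for the advantage computation.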