This paper studies a stochastic variant of the vehicle routing problem (VRP) in which both customer locations and demands are uncertain. In particular, potential customers are not restricted to a predefined customer set but are continuously distributed over a given service area. The objective is to maximize the served demand subject to vehicle capacity and time restrictions. We call this problem the VRP with stochastic customers and demands (VRPSCD). For this problem, we first propose a Markov Decision Process (MDP) formulation representing the classical centralized decision-making perspective, in which a single decision-maker establishes the routes of all vehicles. Although the resulting formulation turns out to be intractable, it provides the foundation for a new MDP formulation of the VRPSCD representing a decentralized decision-making framework, where vehicles autonomously establish their own routes. This new formulation allows us to develop several strategies to reduce the dimension of the state and action spaces, resulting in a considerably more tractable problem. We solve the decentralized problem via Reinforcement Learning; in particular, we develop a Q-learning algorithm featuring state-of-the-art acceleration techniques such as Replay Memory and Double Q Network. Computational results show that our method considerably outperforms two commonly adopted benchmark policies (random and heuristic). Moreover, when compared with the existing literature, our approach competes with specialized methods developed for the particular case of the VRPSCD where customer locations and expected demands are known in advance. Finally, we show that the value functions and policies obtained by our algorithm can be easily embedded in Rollout algorithms, thus further improving their performance.
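To make the Q-learning machinery concrete, the sketch below shows a generic tabular double Q-learning agent with a replay memory. It is not the paper's algorithm (the VRPSCD states and actions are far richer than the hashable states assumed here), and the environment interface (`reset()`/`step()`) is a hypothetical placeholder; the sketch only illustrates the two techniques the abstract names, sampling past transitions from a buffer and decoupling action selection from action evaluation across two value estimators.

```python
import random
from collections import deque, defaultdict

# Hypothetical environment interface assumed by this sketch:
#   env.reset() -> state,  env.step(action) -> (next_state, reward, done)
# States and actions are assumed discrete and hashable, unlike the paper's VRPSCD states.

class DoubleQLearner:
    """Tabular double Q-learning with a replay memory (generic sketch)."""

    def __init__(self, actions, alpha=0.1, gamma=0.99, epsilon=0.1, buffer_size=10_000):
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        # Two value tables: using two estimators mitigates the overestimation
        # bias introduced by the max operator in standard Q-learning.
        self.q_a = defaultdict(lambda: defaultdict(float))
        self.q_b = defaultdict(lambda: defaultdict(float))
        # Replay memory: past transitions are resampled to break the temporal
        # correlation of consecutive updates and reuse experience.
        self.memory = deque(maxlen=buffer_size)

    def act(self, state):
        # Epsilon-greedy exploration over the sum of both estimators.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q_a[state][a] + self.q_b[state][a])

    def store(self, state, action, reward, next_state, done):
        self.memory.append((state, action, reward, next_state, done))

    def replay(self, batch_size=32):
        if len(self.memory) < batch_size:
            return
        for s, a, r, s2, done in random.sample(self.memory, batch_size):
            # Randomly pick which table learns; the other evaluates the target,
            # so the argmax and the value estimate come from different tables.
            learn, evaluate = (self.q_a, self.q_b) if random.random() < 0.5 else (self.q_b, self.q_a)
            if done:
                target = r
            else:
                best = max(self.actions, key=lambda a2: learn[s2][a2])
                target = r + self.gamma * evaluate[s2][best]
            learn[s][a] += self.alpha * (target - learn[s][a])
```

A deep-network variant (as suggested by "Double Q Network" in the abstract) would replace the two tables with an online and a target network, but the update structure, selecting the action with one estimator and valuing it with the other over minibatches drawn from the replay memory, is the same.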