This paper leverages recent developments in reinforcement learning and deep learning to solve the supply chain inventory management problem, a complex sequential decision-making problem that consists of determining the optimal quantity of products to produce and ship to different warehouses over a given time horizon. A mathematical formulation of the stochastic two-echelon supply chain environment is given, which allows an arbitrary number of warehouses and product types to be managed. Additionally, an open-source library that interfaces with deep reinforcement learning algorithms is developed and made publicly available for solving the inventory management problem. The performance achieved by state-of-the-art deep reinforcement learning algorithms is compared through a rich set of numerical experiments on synthetically generated data. The experimental plan covers different supply chain structures, topologies, demands, capacities, and costs. Results show that the PPO algorithm adapts very well to different characteristics of the environment. The VPG algorithm almost always converges to a local maximum, although it typically achieves an acceptable performance level. Finally, A3C is the fastest algorithm, but, like VPG, it never matches the performance of PPO. In conclusion, the numerical experiments show that deep reinforcement learning performs consistently better than standard inventory management strategies, such as the static (s, Q)-policy, and can therefore be considered a practical and effective option for solving real-world instances of the stochastic two-echelon supply chain problem.
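For reference, the static (s, Q)-policy used as a baseline follows a simple reorder rule: whenever the inventory position of a warehouse drops to or below the reorder point s, a fixed lot of Q units is ordered; otherwise nothing is ordered. The following minimal sketch illustrates this rule; the function name and the vectorised handling of several warehouses are illustrative assumptions, not part of the paper's library.

```python
import numpy as np

def s_q_policy(inventory_position, s, Q):
    """Static (s, Q)-policy: order a fixed lot of Q units for every warehouse
    whose inventory position is at or below the reorder point s."""
    inventory_position = np.asarray(inventory_position)
    return np.where(inventory_position <= s, Q, 0)

# Illustrative usage for two warehouses with reorder point s=20 and lot size Q=50.
orders = s_q_policy(inventory_position=[15, 35], s=20, Q=50)
print(orders)  # -> [50  0]
```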