This paper leverages recent developments in reinforcement learning and deep learning to solve the supply chain inventory management (SCIM) problem, a complex sequential decision-making problem that consists of determining the optimal quantity of products to produce and ship to different warehouses over a given time horizon. A mathematical formulation of the stochastic two-echelon supply chain environment is given, which allows an arbitrary number of warehouses and product types to be managed. Additionally, an open-source library that interfaces with deep reinforcement learning (DRL) algorithms is developed and made publicly available for solving the SCIM problem. The performance of state-of-the-art DRL algorithms is compared through a rich set of numerical experiments on synthetically generated data, with an experimental plan that covers different supply chain structures, topologies, demands, capacities, and costs. Results show that the PPO algorithm adapts very well to different characteristics of the environment. The VPG algorithm almost always converges to a local maximum, although it typically achieves an acceptable performance level. Finally, A3C is the fastest algorithm but, like VPG, it never matches the performance of PPO. In conclusion, the numerical experiments show that DRL consistently outperforms standard reorder policies, such as the static (s, Q)-policy. Thus, it can be considered a practical and effective option for solving real-world instances of the stochastic two-echelon SCIM problem.
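For context, the static (s, Q)-policy used as a baseline orders a fixed quantity Q whenever the inventory position drops to or below the reorder point s. The sketch below is a minimal, illustrative single-warehouse simulation of this rule; the demand distribution, cost-free dynamics, and function names are assumptions for illustration, not part of the paper's environment or library.

```python
import random


def sq_policy_order(inventory_position, s, Q):
    """Static (s, Q)-policy: order a fixed lot Q when the inventory
    position is at or below the reorder point s, otherwise order nothing."""
    return Q if inventory_position <= s else 0


def simulate_sq(s, Q, horizon=50, start_inventory=20, seed=0):
    """Illustrative one-warehouse simulation (hypothetical setup):
    demand is uniform on [0, 10], orders arrive instantly, and unmet
    demand is lost. Returns final inventory and total units ordered."""
    rng = random.Random(seed)
    inventory = start_inventory
    total_ordered = 0
    for _ in range(horizon):
        order = sq_policy_order(inventory, s, Q)
        inventory += order
        total_ordered += order
        demand = rng.randint(0, 10)
        inventory = max(0, inventory - demand)  # lost sales
    return inventory, total_ordered
```

Because s and Q are fixed over the whole horizon, the policy cannot react to non-stationary demand or interactions between warehouses, which is precisely the gap the DRL agents exploit in the experiments.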