We address the problem of production planning and distribution in multi-echelon supply chains. We consider uncertain demands and lead times, which make the problem stochastic and non-linear. A Markov Decision Process formulation and a Non-linear Programming model are presented. Since this is a sequential decision-making problem, Deep Reinforcement Learning (RL) is a possible solution approach. This type of technique has gained considerable attention from the Artificial Intelligence and Optimization communities in recent years. Given the good results obtained with Deep RL approaches in different areas, there is growing interest in applying them to problems from the Operations Research field. We have used a Deep RL technique, namely Proximal Policy Optimization (PPO2), to solve the problem under uncertain, regular and seasonal demands and constant or stochastic lead times. Experiments are carried out in different scenarios to better assess the suitability of the algorithm. An agent based on a linearized model is used as a baseline. Experimental results indicate that PPO2 is a competitive and adequate tool for this type of problem. The PPO2 agent outperforms the baseline in all scenarios with stochastic lead times (by 7.3-11.2%), regardless of whether demands are seasonal. In scenarios with constant lead times, the PPO2 agent is better when uncertain demands are non-seasonal (by 2.2-4.7%). The results show that the greater the uncertainty of the scenario, the greater the viability of this type of approach.
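To make the solution approach concrete, the following is a minimal sketch of how PPO2 (the name matches the implementation in the stable-baselines library) could be trained on a toy two-echelon inventory environment. This is not the paper's actual formulation: the environment dynamics, cost coefficients, demand distribution, and all parameter values below are illustrative assumptions.

```python
import gym
import numpy as np
from gym import spaces
from stable_baselines import PPO2
from stable_baselines.common.vec_env import DummyVecEnv


class SupplyChainEnv(gym.Env):
    """Toy single-product, two-echelon environment (illustrative assumptions only)."""

    def __init__(self, horizon=52):
        super().__init__()
        # Action: production quantity and shipment quantity, scaled to [0, 1].
        self.action_space = spaces.Box(low=0.0, high=1.0, shape=(2,), dtype=np.float32)
        # Observation: factory stock, retailer stock, last observed demand.
        self.observation_space = spaces.Box(low=0.0, high=np.inf, shape=(3,), dtype=np.float32)
        self.horizon = horizon

    def reset(self):
        self.t = 0
        self.factory, self.retailer = 50.0, 50.0
        self.demand = 20.0
        return self._obs()

    def _obs(self):
        return np.array([self.factory, self.retailer, self.demand], dtype=np.float32)

    def step(self, action):
        # Rescale actions to physical units; production happens before shipment.
        produce, ship = 100.0 * np.clip(action, 0.0, 1.0)
        ship = min(ship, self.factory + produce)
        self.factory = self.factory + produce - ship
        # Stochastic demand (assumed Gaussian here, truncated at zero).
        self.demand = max(0.0, np.random.normal(20.0, 5.0))
        sales = min(self.retailer + ship, self.demand)
        self.retailer = self.retailer + ship - sales
        # Reward: revenue minus production, holding, and lost-sales costs.
        reward = (2.0 * sales - 0.5 * produce
                  - 0.1 * (self.factory + self.retailer)
                  - 1.0 * (self.demand - sales))
        self.t += 1
        return self._obs(), float(reward), self.t >= self.horizon, {}


env = DummyVecEnv([lambda: SupplyChainEnv()])
model = PPO2("MlpPolicy", env, verbose=0)
model.learn(total_timesteps=100000)
```

In a setup like this, stochastic lead times would be modeled by delaying shipments in a pipeline queue rather than delivering them within the same period; the sketch omits this for brevity.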