From cutting costs to improving customer experience, forecasting is the crux of retail supply chain management (SCM) and the key to better supply chain performance. Several retailers are using AI/ML models to gather datasets and provide forecast guidance in applications such as Cognitive Demand Forecasting, Product End-of-Life Forecasting, and Demand Integrated Product Flow. Early work in these areas applied classical algorithms to a gamut of challenges such as network flows and graphs. But recent disruptions have made it critical for supply chains to have the resiliency to handle unexpected events, and the biggest challenge lies in matching supply with demand. Reinforcement Learning (RL), with its ability to train systems to respond to unforeseen environments, is being increasingly adopted in SCM to improve forecast accuracy and solve supply chain optimization challenges. Companies like UPS and Amazon have developed RL algorithms to define winning AI strategies and keep up with rising consumer delivery expectations. While there are many ways to build RL algorithms for supply chain use cases, the OpenAI Gym toolkit is becoming the preferred choice because of its robust framework for event-driven simulations. This white paper explores the application of RL in supply chain forecasting and describes how to build suitable RL models and algorithms using the OpenAI Gym toolkit.
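To make the Gym-based approach concrete, below is a minimal sketch of an inventory-control environment that follows the OpenAI Gym `reset()`/`step(action)` convention. The class name, cost parameters, and uniform demand distribution are illustrative assumptions, not part of the white paper; it is written in plain Python (subclassing `gym.Env` would additionally declare `action_space` and `observation_space`) so the sketch runs without the `gym` package installed.

```python
import random


class InventoryEnv:
    """Illustrative inventory-control environment in the OpenAI Gym style.

    Observation: current stock level. Action: order quantity.
    Reward: negative cost (holding cost plus stockout penalty), so an
    RL agent learns to balance overstocking against unmet demand.
    All parameters below are assumed for the sketch.
    """

    def __init__(self, capacity=100, holding_cost=1.0, stockout_cost=5.0,
                 mean_demand=10, horizon=52, seed=0):
        self.capacity = capacity            # maximum units on hand
        self.holding_cost = holding_cost    # cost per unit held per period
        self.stockout_cost = stockout_cost  # penalty per unit of unmet demand
        self.mean_demand = mean_demand
        self.horizon = horizon              # episode length in periods
        self.rng = random.Random(seed)

    def reset(self):
        self.stock = self.capacity // 2
        self.t = 0
        return self.stock  # initial observation

    def step(self, order_qty):
        # Replenish first, capped by warehouse capacity.
        self.stock = min(self.capacity, self.stock + order_qty)
        # Uniform demand is a stand-in for a fitted demand model.
        demand = self.rng.randint(0, 2 * self.mean_demand)
        sold = min(self.stock, demand)
        unmet = demand - sold
        self.stock -= sold
        reward = -(self.holding_cost * self.stock + self.stockout_cost * unmet)
        self.t += 1
        done = self.t >= self.horizon
        return self.stock, reward, done, {"demand": demand, "unmet": unmet}


# Rollout with a fixed-order policy; an RL agent would choose the action.
env = InventoryEnv()
obs, done, total_reward = env.reset(), False, 0.0
while not done:
    obs, reward, done, info = env.step(10)
    total_reward += reward
```

Because the environment exposes only `reset` and `step`, any Gym-compatible RL library can train against it unchanged; the event-driven simulation logic stays inside the environment class.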