In this paper, we present an end-to-end framework for dynamic pricing (DP) on e-commerce platforms using methods based on deep reinforcement learning (DRL). Using four groups of different business data to represent the state of each time period, we model the dynamic pricing problem as a Markov Decision Process (MDP). Compared with state-of-the-art DRL-based dynamic pricing algorithms, our approaches make the following three contributions. First, we extend the problem from a discrete price set to a continuous one. Second, instead of using revenue directly as the reward function, we define a new reward function named the difference of revenue conversion rates (DRCR). Third, we tackle the cold-start problem of the MDP by pre-training and evaluating on carefully chosen historical sales data. Our approaches are evaluated both offline, on a real dataset from Alibaba Inc., and through online field experiments on Tmall.com involving thousands of items, which started in July 2018 and lasted for months. To our knowledge, no prior DP field experiment has used DRL. The field experiment results suggest that DRCR is a more appropriate reward function than revenue, which is widely used in the current literature. They also show that continuous price sets perform better than discrete ones and that our approaches significantly outperform manual pricing by operations experts.
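To make the reward design concrete, the following is a minimal Python sketch of the DRCR reward. The abstract does not give the exact formula, so this sketch assumes the revenue conversion rate of a period is revenue normalized by traffic (unique visitors, UV) and that DRCR is the change in this rate between consecutive pricing periods; all function and variable names here are hypothetical, not taken from the paper.

```python
# Illustrative sketch of a DRCR-style reward (assumptions noted below, not
# the paper's exact definition):
#   - revenue conversion rate of a period = revenue / UV (traffic)
#   - DRCR = conversion rate of current period - conversion rate of previous period

def revenue_conversion_rate(revenue: float, uv: float) -> float:
    """Revenue normalized by traffic; guards against zero-traffic periods."""
    return revenue / uv if uv > 0 else 0.0

def drcr_reward(revenue_t: float, uv_t: float,
                revenue_prev: float, uv_prev: float) -> float:
    """Difference of revenue conversion rates between consecutive periods."""
    return (revenue_conversion_rate(revenue_t, uv_t)
            - revenue_conversion_rate(revenue_prev, uv_prev))

# Example: raw revenue rose, but per-visitor revenue fell, so the reward is
# negative, unlike a raw-revenue reward, which would be positive here.
print(drcr_reward(revenue_t=1200.0, uv_t=1000.0,
                  revenue_prev=1000.0, uv_prev=500.0))  # -> -0.8
```

Normalizing by traffic in this way would discount revenue changes driven by demand fluctuations rather than by the pricing action itself, which is one plausible motivation for preferring such a reward over raw revenue.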