We show the equivalence of discrete choice models and a forest of binary decision trees. This suggests that standard machine learning techniques based on random forests can serve to estimate discrete choice models with an interpretable output: the underlying trees can be viewed as the internal choice process of customers. Our data-driven theoretical results show that random forests can consistently predict the choice probabilities of any discrete choice model. Moreover, our algorithm predicts choice probabilities for unseen assortments, with mechanisms and errors that can be analyzed theoretically. We also prove that the splitting criterion used in random forests, the Gini index, is capable of recovering the preference rankings of customers. The framework has unique practical advantages: it can capture behavioral patterns such as irrationality or sequential search; it handles nonstandard formats of training data that result from aggregation; it can measure product importance based on how frequently a random customer's decision depends on the presence of the product; and it can incorporate price information and customer features. Our numerical results show that using random forests to estimate customer choices can outperform the best parametric models on synthetic and real datasets when enough data are available or when the underlying discrete choice model cannot be correctly specified by existing parametric models.
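To make the estimation idea concrete, below is a minimal sketch using scikit-learn's RandomForestClassifier. The encoding of each observation as a binary assortment indicator vector labeled with the chosen product follows the setup described above, but the simulated ranking-based customer, the dataset sizes, and all hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch, assuming choice data in the format described above:
# each observation is an assortment indicator vector x in {0,1}^N
# (product j is offered iff x[j] = 1), labeled with the index of the
# product the customer chose (0 denotes the no-purchase option).
# The simulated customer below is purely illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
N = 5      # number of products (assumed)
T = 2000   # number of observed transactions (assumed)

# Simulate assortments and choices from a ranking-based customer who
# buys the lowest-indexed available product and otherwise leaves.
X = rng.integers(0, 2, size=(T, N))
y = np.array([next((j + 1 for j in range(N) if x[j]), 0) for x in X])

forest = RandomForestClassifier(
    n_estimators=500, criterion="gini", random_state=0
).fit(X, y)

# Estimated choice probabilities for an unseen assortment {product 1, product 3}:
assortment = np.array([[1, 0, 1, 0, 0]])
print(forest.classes_)                   # choice labels (0 = no purchase)
print(forest.predict_proba(assortment))  # estimated choice probabilities

# Gini-based feature importances give a product-importance measure:
# roughly, how often a random customer's decision depends on a product's presence.
print(forest.feature_importances_)
```

Under this encoding, price information or customer features would enter the same way, as additional columns of X alongside the availability indicators.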