Motivated by the successes of deep learning, we propose a class of neural network-based discrete choice models, called RUMnets, inspired by the random utility maximization (RUM) framework. The model formulates the agents' random utility function using the sample average approximation (SAA) method. We show that RUMnets sharply approximate the class of RUM discrete choice models: any model derived from random utility maximization has choice probabilities that can be approximated arbitrarily closely by a RUMnet. Conversely, any RUMnet is consistent with the RUM principle. We derive an upper bound on the generalization error of RUMnets fitted on choice data, and gain theoretical insights into their ability to predict choices on new, unseen data as a function of critical parameters of the dataset and architecture. By leveraging open-source libraries for neural networks, we find that RUMnets outperform other state-of-the-art choice modeling and machine learning methods by a significant margin on two real-world datasets.
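As a minimal illustration of the sample average approximation idea underlying RUMnets (not the paper's actual architecture), the sketch below estimates RUM choice probabilities by drawing K utility samples per alternative and counting how often each alternative attains the maximum. The Gumbel noise and all names here are illustrative assumptions; with Gumbel noise, the SAA estimate converges to multinomial logit probabilities.

```python
import numpy as np

# Illustrative sketch (hypothetical, not the paper's exact model):
# approximate RUM choice probabilities via sample average approximation (SAA).
# Each of K sampled utility realizations assigns a utility to every alternative;
# the SAA choice probability of an alternative is the fraction of samples
# in which that alternative has the highest utility.

rng = np.random.default_rng(0)

def saa_choice_probs(utilities):
    """utilities: (K, n) array of K utility samples over n alternatives."""
    K, n = utilities.shape
    winners = utilities.argmax(axis=1)  # utility-maximizing choice per sample
    return np.bincount(winners, minlength=n) / K

# Example: 3 alternatives with mean utilities [1.0, 0.5, 0.0] plus random noise.
# Gumbel noise is assumed here purely so the limit is the familiar logit model.
mean_u = np.array([1.0, 0.5, 0.0])
samples = mean_u + rng.gumbel(size=(10_000, 3))
probs = saa_choice_probs(samples)
```

With enough samples, `probs` approaches the softmax of the mean utilities, so the alternative with the highest mean utility receives the highest estimated choice probability.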