This paper studies the Random Utility Model (RUM) in environments where the decision maker is imperfectly informed about the payoffs associated with each of the alternatives he faces. By embedding the RUM into an online decision problem, we make four contributions. First, we propose a gradient-based learning algorithm and show that a large class of RUMs is Hannan consistent \citep{Hannan1957}; that is, the average difference between the expected payoff generated by a RUM and that of the best fixed policy in hindsight goes to zero as the number of periods increases. Second, we show that the class of Generalized Extreme Value (GEV) models can be implemented with our learning algorithm. Examples in the GEV class include the Nested Logit, Ordered, and Product Differentiation models, among many others. Third, we show that our gradient-based algorithm is the dual, in the sense of convex analysis, of the Follow the Regularized Leader (FTRL) algorithm, which is widely used in the machine learning literature. Finally, we discuss how our approach can incorporate recency bias and be used to implement prediction markets in general environments.
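To fix ideas, Hannan consistency admits a compact statement. The display below is our illustration in standard online-learning notation, which the abstract itself does not fix: $u_t \in \mathbb{R}^n$ denotes the period-$t$ payoff vector, $p_t$ the choice probabilities produced by the RUM, and $\Delta$ the simplex over the $n$ alternatives.
% Hannan consistency (no regret): average regret against the best
% fixed policy in hindsight vanishes as the horizon T grows.
\[
  \frac{1}{T}\left( \max_{p \in \Delta} \sum_{t=1}^{T} \langle u_t, p \rangle
  \;-\; \sum_{t=1}^{T} \langle u_t, p_t \rangle \right)
  \;\longrightarrow\; 0
  \qquad \text{as } T \to \infty .
\]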
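As a concrete illustration of the duality with FTRL (our example, specialized to the Logit member of the GEV class rather than to anything particular in the paper), take the Logit social surplus function, the log-sum-exp. Its gradient evaluated at the scaled cumulative payoffs gives the familiar multiplicative-weights choice probabilities:
% Logit social surplus and the induced gradient-based update,
% with learning rate \eta > 0 and cumulative payoffs \sum_{s<t} u_s.
\[
  \varphi(u) = \log \sum_{j=1}^{n} e^{u_j},
  \qquad
  p_{t,j} = \frac{\partial \varphi}{\partial u_j}\!\left( \eta \sum_{s=1}^{t-1} u_s \right)
  = \frac{\exp\!\left( \eta \sum_{s=1}^{t-1} u_{s,j} \right)}
         {\sum_{k=1}^{n} \exp\!\left( \eta \sum_{s=1}^{t-1} u_{s,k} \right)} .
\]
This coincides with FTRL under the entropic regularizer, since $\varphi(u) = \max_{p \in \Delta} \{ \langle u, p \rangle - \sum_{j} p_j \log p_j \}$; that is, the log-sum-exp is the convex conjugate of negative entropy on the simplex, so the gradient step above is exactly the FTRL maximizer.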