User-solicited ratings systems in online marketplaces suffer from a cold-start problem: new products have very few ratings, which may give an overly pessimistic or optimistic picture of a product's true quality. This can lead platforms to promote new products that are actually low quality, or discourage sellers of high-quality products into exiting the market early. In this paper, we address this cold-start problem with an approach that softens early reviews by interpolating them against a background distribution of product ratings on the platform. We instantiate this approach using a parametric empirical Bayes model, which weighs the reviews of a new product against expectations of what that product's quality ought to be, based on previous products in the same market; the prior's influence is strongest when the number of reviews for the new product is low. We apply our method to real-world data drawn from Amazon as well as synthetically generated data. In aggregate, parametric empirical Bayes better predicts future ratings, especially when few reviews are available. However, in these same low-data settings, our method performs worse on individual products that are outliers within the population.
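The interpolation described above can be sketched with a simple shrinkage estimator. The Normal-Normal model below, along with all parameter names and numeric values, is an illustrative assumption for exposition, not the paper's exact specification: product quality is treated as drawn from a platform-wide prior N(mu, tau^2), and individual ratings as noisy observations with variance sigma^2.

```python
# Minimal sketch of parametric empirical Bayes shrinkage for star ratings.
# Illustrative Normal-Normal model (an assumption, not the paper's exact
# specification): product quality theta ~ N(mu, tau2), and each observed
# rating ~ N(theta, sigma2).

def shrunk_rating(ratings, mu, tau2, sigma2):
    """Posterior mean of a product's quality given its ratings.

    With few ratings, the estimate stays close to the platform-wide
    prior mean `mu`; as ratings accumulate, it approaches the sample mean.
    """
    n = len(ratings)
    if n == 0:
        return mu
    xbar = sum(ratings) / n
    precision_prior = 1.0 / tau2   # weight of the background distribution
    precision_data = n / sigma2    # weight of the observed ratings
    return (precision_prior * mu + precision_data * xbar) / (
        precision_prior + precision_data
    )

# A new product with one glowing 5-star review is pulled strongly toward
# the platform mean of 4.0, while a product with many 5-star reviews is
# barely moved.
print(shrunk_rating([5.0], mu=4.0, tau2=0.25, sigma2=1.0))       # 4.2
print(shrunk_rating([5.0] * 50, mu=4.0, tau2=0.25, sigma2=1.0))  # ~4.93
```

In a full empirical Bayes treatment, the prior parameters (here `mu` and `tau2`) would themselves be estimated from the marginal distribution of ratings across all products on the platform rather than fixed by hand.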