Recommender systems play a key role in shaping modern web ecosystems. These systems alternate between (1) making recommendations, (2) collecting user responses to these recommendations, and (3) retraining the recommendation algorithm based on this feedback. During this process, the recommender system influences the user behavioral data that is subsequently used to update it, thus creating a feedback loop. Recent work has shown that feedback loops may compromise recommendation quality and homogenize user behavior, raising ethical and performance concerns when deploying recommender systems. To address these issues, we propose Causal Adjustment for Feedback Loops (CAFL), an algorithm that provably breaks feedback loops using causal inference and can be applied to any recommendation algorithm that optimizes a training loss. Our main observation is that a recommender system does not suffer from feedback loops if it reasons about causal quantities, namely the intervention distributions of recommendations on user ratings. Moreover, we can calculate this intervention distribution from observational data by adjusting for the recommender system's predictions of user preferences. Using simulated environments, we demonstrate that CAFL improves recommendation quality when compared to prior correction methods.
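To make the adjustment idea concrete, the following is a minimal, hypothetical sketch of a backdoor-style adjustment in a toy discrete setting. The variable names (Z for the recommender's predicted preference, A for the recommendation, Y for the rating), the simulated observational log, and the binary treatment are illustrative assumptions, not the paper's actual CAFL implementation; the sketch only shows how conditioning on the recommender's prediction can recover an interventional quantity from confounded logs.

```python
# Hypothetical sketch (not the paper's code): backdoor adjustment over the
# recommender's predicted preference Z to estimate P(Y=1 | do(A=a)).
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Toy observational log: Z = recommender's predicted preference (confounder),
# A = whether the item was recommended, Y = user rating (binary like/dislike).
n = 10_000
z = rng.binomial(1, 0.5, n)                      # predicted preference
a = rng.binomial(1, np.where(z == 1, 0.8, 0.2))  # recommendation depends on Z
y = rng.binomial(1, 0.2 + 0.3 * z + 0.3 * a)     # rating depends on Z and A

log = pd.DataFrame({"Z": z, "A": a, "Y": y})

def p_y_do_a(df: pd.DataFrame, a_val: int) -> float:
    """Backdoor adjustment: P(Y=1 | do(A=a)) = sum_z P(Y=1 | A=a, Z=z) P(Z=z)."""
    total = 0.0
    for z_val, group in df.groupby("Z"):
        p_z = len(group) / len(df)
        subset = group[group["A"] == a_val]
        p_y_given_az = subset["Y"].mean() if len(subset) else 0.0
        total += p_y_given_az * p_z
    return total

# Naive conditional estimate (confounded) vs. adjusted interventional estimate.
naive = log.loc[log["A"] == 1, "Y"].mean()
adjusted = p_y_do_a(log, a_val=1)
print(f"P(Y=1 | A=1)     ~ {naive:.3f}  (confounded by Z)")
print(f"P(Y=1 | do(A=1)) ~ {adjusted:.3f}  (backdoor-adjusted over Z)")
```

In this toy setup, the naive conditional estimate overstates the effect of recommending an item because the recommender preferentially shows items it already predicts the user will like; adjusting for that prediction removes the spurious association, which is the intuition behind breaking the feedback loop.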