Algorithms that aid human tasks, such as recommendation systems, are ubiquitous: they appear in everything from social media to streaming video to online shopping. However, the feedback loop between people and algorithms is poorly understood and can amplify cognitive and social biases (algorithmic confounding), leading to unexpected outcomes. In this work, we explore algorithmic confounding in collaborative filtering-based recommendation algorithms through teacher-student learning simulations. Namely, a student collaborative filtering model, trained on simulated choices, is used by the recommendation algorithm to recommend items to agents. Agents may choose some of these items according to an underlying teacher model, and the new choices are then fed back into the student model as new training data (approximating online machine learning). These simulations demonstrate how algorithmic confounding produces erroneous recommendations, which in turn lead to instability, i.e., wide variations in an item's popularity across simulation realizations. We use the simulations to demonstrate a novel approach to training collaborative filtering models that yields more stable and accurate recommendations. Our methodology is general enough to be extended to other socio-technical systems in order to better quantify and improve the stability of algorithms. These results highlight the need to account for emergent behaviors arising from interactions between people and algorithms.
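To make the teacher-student feedback loop concrete, below is a minimal Python sketch of one such simulation, assuming a low-rank teacher preference matrix and a crude alternating-least-squares (ALS) student. The dimensions, the top-1 recommendation policy, the acceptance rule, and the helper fit_student are all illustrative assumptions, not the implementation used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, n_items, rank = 200, 50, 5

# Teacher: ground-truth choice probabilities, hidden from the student.
U_t = rng.normal(size=(n_agents, rank))
V_t = rng.normal(size=(n_items, rank))
teacher_prob = 1.0 / (1.0 + np.exp(-(U_t @ V_t.T)))  # sigmoid of low-rank scores

observed = np.zeros((n_agents, n_items))  # 1 = item chosen by agent, 0 = not (yet)

# Cold start: expose each agent to a few random items, accepted with teacher probability.
for a in range(n_agents):
    items = rng.choice(n_items, size=5, replace=False)
    observed[a, items[rng.random(5) < teacher_prob[a, items]]] = 1.0

def fit_student(R, rank, iters=10, reg=0.1):
    """Crude alternating-least-squares fit of the student's latent factors."""
    U = rng.normal(scale=0.1, size=(R.shape[0], rank))
    V = rng.normal(scale=0.1, size=(R.shape[1], rank))
    I = reg * np.eye(rank)
    for _ in range(iters):
        U = R @ V @ np.linalg.inv(V.T @ V + I)
        V = R.T @ U @ np.linalg.inv(U.T @ U + I)
    return U, V

for step in range(20):  # one recommend-choose-retrain round per step
    U_s, V_s = fit_student(observed, rank)
    scores = U_s @ V_s.T
    scores[observed == 1] = -np.inf  # do not re-recommend already-chosen items
    recs = scores.argmax(axis=1)     # top-1 recommendation per agent
    # Agents accept with their teacher-model probability for the recommended item;
    # accepted items become new training data for the next round (online learning).
    accept = rng.random(n_agents) < teacher_prob[np.arange(n_agents), recs]
    observed[np.arange(n_agents)[accept], recs[accept]] = 1.0

print("final item popularity:", observed.sum(axis=0).astype(int))
```

Re-running this loop under different random seeds and comparing the final popularity vectors gives a rough picture of the run-to-run instability described above.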