Though recommender systems are defined by personalization, recent work has shown the importance of additional, beyond-accuracy objectives, such as fairness. Because users often expect their recommendations to be purely personalized, these new algorithmic objectives must be communicated transparently in a fairness-aware recommender system. While explanation has a long history in recommender systems research, there has been little work that attempts to explain systems that use a fairness objective. Although previous work in other branches of AI has explored the use of explanations as a tool to increase fairness, that work has not focused on recommendation. Here, we consider user perspectives of fairness-aware recommender systems and techniques for enhancing their transparency. We describe the results of an exploratory interview study that investigates user perceptions of fairness, recommender systems, and fairness-aware objectives. We propose three features -- informed by the needs of our participants -- that could improve user understanding of and trust in fairness-aware recommender systems.