In recent years, personalization research has been delving into issues of explainability and fairness. While some techniques have emerged to provide post-hoc and self-explanatory individual recommendations, there is still a lack of methods aimed at uncovering unfairness in recommender systems beyond identifying biased user and item features. This paper proposes a new algorithm, GNNUERS, which uses counterfactuals to pinpoint user unfairness explanations in terms of user-item interactions within a bipartite graph. By perturbing the graph topology, GNNUERS reduces differences in utility between protected and unprotected demographic groups. The paper evaluates the approach on four real-world graphs from different domains and demonstrates its ability to systematically explain user unfairness in three state-of-the-art GNN-based recommendation models. The analysis of the perturbed networks reveals insightful patterns that confirm the nature of the unfairness underlying the explanations. The source code and preprocessed datasets are available at https://github.com/jackmedda/RS-BGExplainer.
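To make the counterfactual idea concrete, the sketch below shows one common way such an explainer can be framed: a differentiable mask over existing user-item edges is optimized to shrink the utility gap between demographic groups while keeping the perturbation small, and the edges the mask drops form the counterfactual explanation. This is a minimal toy illustration, not the authors' implementation; the dot-product "recommender", the utility function, and all variable names are hypothetical stand-ins (GNNUERS itself perturbs the graph seen by a trained GNN recommender).

```python
import torch

torch.manual_seed(0)

# Toy bipartite user-item graph with two demographic groups.
n_users, n_items = 8, 12
adj = (torch.rand(n_users, n_items) < 0.3).float()      # existing interactions
group = torch.tensor([0, 0, 0, 0, 1, 1, 1, 1])           # 0 = protected, 1 = unprotected

# Stand-in recommender: fixed embeddings scored by dot product.
# In the paper's setting, a trained GNN model would play this role.
U = torch.randn(n_users, 16)
V = torch.randn(n_items, 16)

def utility(masked_adj: torch.Tensor) -> torch.Tensor:
    # Toy per-user utility: mean score over the (softly) retained edges.
    scores = U @ V.T
    return (scores * masked_adj).sum(1) / masked_adj.sum(1).clamp(min=1e-8)

# Learnable perturbation mask over existing edges; sigmoid keeps values
# in [0, 1], and the initialization near 1 means "keep every edge".
logits = torch.nn.Parameter(torch.full_like(adj, 3.0))
opt = torch.optim.Adam([logits], lr=0.1)

for step in range(200):
    mask = torch.sigmoid(logits) * adj                   # only existing edges can be removed
    u = utility(mask)
    gap = (u[group == 0].mean() - u[group == 1].mean()).abs()  # group utility gap
    sparsity = (adj - mask).abs().sum()                  # keep the perturbation minimal
    loss = gap + 0.01 * sparsity
    opt.zero_grad()
    loss.backward()
    opt.step()

# Edges whose mask fell below 0.5 are the counterfactual deletions
# that "explain" the unfairness in this toy setup.
removed = (torch.sigmoid(logits) < 0.5) & adj.bool()
print(f"final gap {gap.item():.4f}, removed {int(removed.sum())} edges")
```

The sparsity term reflects the usual counterfactual-explanation requirement that the perturbed graph stay close to the original, so the removed edges remain a small, interpretable set.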