In recent years, personalization research has been delving into issues of explainability and fairness. While some techniques have emerged to provide post-hoc and self-explanatory individual recommendations, there is still a lack of methods aimed at uncovering unfairness in recommendation systems beyond identifying biased user and item features. This paper proposes a new algorithm, GNNUERS, which uses counterfactuals to pinpoint user unfairness explanations in terms of user-item interactions within a bipartite graph. By perturbing the graph topology, GNNUERS reduces the utility gap between protected and unprotected demographic groups. The paper evaluates the approach on four real-world graphs from different domains and demonstrates its ability to systematically explain user unfairness in three state-of-the-art GNN-based recommendation models. Analysis of the perturbed networks reveals patterns that confirm the nature of the unfairness underlying the explanations. The source code and preprocessed datasets are available at https://github.com/jackmedda/RS-BGExplainer.
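To make the perturbation idea concrete, the following is a minimal, self-contained sketch of counterfactual edge deletion on a toy bipartite graph. It is not the paper's implementation: the frozen-embedding recommender, the one-hop utility proxy, the loss weights, and the demographic split are all illustrative assumptions introduced here.

```python
# A minimal sketch of counterfactual edge perturbation for fairness
# explanation, loosely following the idea described in the abstract.
# The toy recommender, utility proxy, loss weights, and group split
# are illustrative assumptions, not the paper's implementation.
import torch

torch.manual_seed(0)

n_users, n_items, dim = 8, 12, 16
# Frozen embeddings standing in for a pretrained GNN recommender.
user_emb = torch.randn(n_users, dim)
item_emb = torch.randn(n_items, dim)

# Observed user-item interactions: the bipartite adjacency matrix.
adj = (torch.rand(n_users, n_items) < 0.3).float()

# Hypothetical binary demographic attribute: first half protected.
protected = torch.zeros(n_users, dtype=torch.bool)
protected[: n_users // 2] = True

# Learnable logits over existing edges; a sigmoid yields a soft
# "keep this edge" probability. Initialized high so optimization
# starts from the unperturbed graph.
edge_logits = torch.full((n_users, n_items), 3.0, requires_grad=True)
opt = torch.optim.Adam([edge_logits], lr=0.1)

def utility(adj_soft):
    """Per-user utility proxy: mean score of a user's interacted
    items after one hop of neighbor aggregation."""
    agg = adj_soft @ item_emb                  # aggregate item neighbors
    scores = (user_emb + agg) @ item_emb.T     # user-item scores
    return (scores * adj).sum(1) / adj.sum(1).clamp(min=1)

for step in range(200):
    keep = torch.sigmoid(edge_logits) * adj    # only delete, never add
    util = utility(keep)
    gap = (util[protected].mean() - util[~protected].mean()).abs()
    sparsity = (adj - keep).sum() / adj.sum()  # perturb as few edges as possible
    loss = gap + 0.5 * sparsity
    opt.zero_grad()
    loss.backward()
    opt.step()

# Edges whose keep-probability collapsed form the counterfactual
# explanation: deleting them narrows the group utility gap.
deleted = (torch.sigmoid(edge_logits) < 0.5) & adj.bool()
print(f"utility gap: {gap.item():.4f}, edges flagged: {int(deleted.sum())}")
```

The masking choice in the sketch restricts the search to edge deletions rather than additions, reflecting the intuition that removing observed interactions, rather than inventing new ones, yields plausible counterfactual explanations of the utility gap.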