State-of-the-art recommender systems can generate high-quality recommendations, but they usually cannot provide intuitive explanations to humans because they rely on black-box prediction models. This lack of transparency highlights the critical importance of improving the explainability of recommender systems. In this paper, we propose to extract causal rules from the user interaction history as post-hoc explanations for black-box sequential recommendation mechanisms, while maintaining the predictive accuracy of the recommendation model. Our approach first obtains counterfactual examples with the aid of a perturbation model, and then extracts personalized causal relationships for the recommendation model through a causal rule mining algorithm. We conduct experiments on several state-of-the-art sequential recommendation models and real-world datasets to verify the performance of our model in generating causal explanations. We also evaluate the discovered causal explanations in terms of quality and fidelity; the results show that, compared with conventional association rules, causal rules provide personalized and more effective explanations for the behavior of black-box recommendation models.
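The perturb-then-mine idea described above can be illustrated with a minimal sketch. All names here (`perturb`, `mine_causal_rules`, the toy recommender passed as `model`) are hypothetical illustrations, not the paper's actual implementation: the black-box model is treated as an opaque function from an interaction history to a recommendation, perturbed histories act as counterfactual examples, and an input item is linked to the original recommendation whenever replacing that item flips the model's output.

```python
import random


def perturb(history, swap_pool, n_samples=50, seed=0):
    """Generate perturbed copies of a user's interaction history by
    randomly replacing one item with a substitute from swap_pool.
    (Hypothetical stand-in for the paper's perturbation model.)"""
    rng = random.Random(seed)
    samples = []
    for _ in range(n_samples):
        h = list(history)
        i = rng.randrange(len(h))
        h[i] = rng.choice(swap_pool)
        samples.append(h)
    return samples


def mine_causal_rules(model, history, swap_pool):
    """Collect (item -> recommendation) rules: an item is causally linked
    to the original recommendation if replacing it changes the black-box
    model's output (counterfactual evidence)."""
    original_rec = model(history)
    rules = []
    for variant in perturb(history, swap_pool):
        changed = [a for a, b in zip(history, variant) if a != b]
        if model(variant) != original_rec:
            for item in changed:
                rules.append((item, original_rec))
    return rules
```

With a toy black-box model that recommends "B" whenever "A" appears in the history and "C" otherwise, mining over the history `["A", "X", "Y"]` yields only the rule `("A", "B")`: perturbations that replace "X" or "Y" leave the output unchanged, so no rule is attributed to them.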