With the prevalence of deep-learning-based embedding approaches, recommender systems have become a proven and indispensable tool in a wide range of information filtering applications. However, for many of them it remains difficult to diagnose which aspects of the deep models' input drive the final ranking decision, and thus they are often opaque to human stakeholders. In this paper, we investigate the trade-off between recommendation performance and explainability, and show that by utilizing contextual features (e.g., item reviews written by users), we can design a series of explainable recommender systems without sacrificing performance. In particular, we propose three explainable recommendation strategies with gradually decreasing model transparency: whitebox, graybox, and blackbox. Each strategy explains its ranking decisions through a different mechanism: attention weights, adversarial perturbations, and counterfactual perturbations, respectively. We apply these explainable models to five real-world data sets under the contextualized setting where users and items have explicit interactions. The empirical results show that our models achieve highly competitive ranking performance and generate accurate and effective explanations, as measured by numerous quantitative metrics and qualitative visualizations.
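To make the whitebox strategy concrete, the sketch below shows one plausible (not the paper's exact) architecture: a review-based recommender whose attention weights over review tokens are returned alongside the ranking score, so the weights themselves serve as the explanation. All class, variable, and dimension choices here are illustrative assumptions.

```python
import torch
import torch.nn as nn


class AttnReviewRecommender(nn.Module):
    """Minimal review-based scorer with attention-as-explanation."""

    def __init__(self, vocab_size: int, num_users: int = 1000, dim: int = 64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)   # review token embeddings
        self.attn = nn.Linear(dim, 1)                # scores each review token
        self.user = nn.Embedding(num_users, dim)     # user embeddings
        self.score = nn.Linear(dim, 1)               # final ranking score

    def forward(self, user_ids, review_tokens):
        # review_tokens: (batch, seq_len) token ids drawn from the item's reviews
        tok = self.embed(review_tokens)                               # (B, L, D)
        weights = torch.softmax(self.attn(tok).squeeze(-1), dim=-1)   # (B, L)
        item_repr = (weights.unsqueeze(-1) * tok).sum(dim=1)          # (B, D)
        pred = self.score(item_repr * self.user(user_ids)).squeeze(-1)
        # Returning the weights exposes which review tokens drove the score,
        # which is the essence of the attention-weight explanation mechanism.
        return pred, weights


# Usage: the top-weighted tokens form the explanation for this ranking score.
model = AttnReviewRecommender(vocab_size=5000)
users = torch.tensor([3])
tokens = torch.randint(0, 5000, (1, 12))
score, expl = model(users, tokens)
top_tokens = expl.topk(3).indices  # indices of the 3 most influential tokens
```

The graybox and blackbox strategies would instead probe a trained model from the outside, searching for adversarial or counterfactual perturbations of the review input that flip the ranking, rather than reading explanations off internal attention weights.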