Explaining to users why certain items are recommended is critical, as it helps users make better decisions, increases their satisfaction, and builds their trust in recommender systems (RS). However, existing explainable RS usually treat explanations as side outputs of the recommendation model, which raises two problems: (1) it is difficult to evaluate the produced explanations because they are usually model-dependent, and (2) as a consequence, the possible impacts of those explanations are rarely investigated. To address the evaluation problem, we propose learning to explain for explainable recommendation. The basic idea is to train a model that selects explanations from a candidate collection, framed as a ranking-oriented task. A major challenge, however, is that the sparsity issue in user-item-explanation data is more severe than that in traditional user-item interaction data, since not every user-item pair can be associated with multiple explanations. To mitigate this issue, we propose performing two sets of matrix factorization by treating the ternary relationship as two groups of binary relationships. To further investigate the impacts of explanations, we extend the traditional item ranking of recommendation to an item-explanation joint-ranking formulation. We study whether purposefully selecting explanations can achieve certain learning goals, e.g., in this paper, improving recommendation performance. Experiments on three large datasets verify the effectiveness of our solution on both item recommendation and explanation ranking. In addition, our user-item-explanation datasets open up new ways of modeling and evaluating recommendation explanations. To facilitate the development of explainable RS, we will make our datasets and code publicly available.
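To make the decomposition concrete, here is a minimal NumPy sketch of the idea. It assumes the ternary user-item-explanation relation is split into a (user, explanation) and an (item, explanation) binary factorization, scored additively and trained with a BPR-style pairwise objective; this split, the scoring rule, and all names (`U`, `V`, `Eu`, `Ei`, `rank_explanations`, `bpr_step`) are illustrative assumptions on our part, not the paper's exact model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, n_expls, k = 100, 200, 500, 16

# Latent factors for the two binary relations (hypothetical parameterization):
# one factorization over (user, explanation), one over (item, explanation).
U = rng.normal(scale=0.1, size=(n_users, k))   # user factors
V = rng.normal(scale=0.1, size=(n_items, k))   # item factors
Eu = rng.normal(scale=0.1, size=(n_expls, k))  # explanation factors, user side
Ei = rng.normal(scale=0.1, size=(n_expls, k))  # explanation factors, item side

def rank_explanations(u, i, top_n=5):
    """Rank candidate explanations for the pair (u, i) by summing the
    scores from the two binary factorizations."""
    scores = U[u] @ Eu.T + V[i] @ Ei.T  # shape: (n_expls,)
    return np.argsort(-scores)[:top_n]

def bpr_step(u, i, e_pos, e_neg, lr=0.01, reg=0.01):
    """One pairwise SGD step: the observed explanation e_pos should score
    higher than a sampled unobserved one e_neg for the pair (u, i)."""
    x = (U[u] @ Eu[e_pos] + V[i] @ Ei[e_pos]) \
        - (U[u] @ Eu[e_neg] + V[i] @ Ei[e_neg])
    g = 1.0 / (1.0 + np.exp(x))  # gradient of -log(sigmoid(x)) w.r.t. x, negated
    U[u] += lr * (g * (Eu[e_pos] - Eu[e_neg]) - reg * U[u])
    V[i] += lr * (g * (Ei[e_pos] - Ei[e_neg]) - reg * V[i])
    Eu[e_pos] += lr * (g * U[u] - reg * Eu[e_pos])
    Eu[e_neg] += lr * (-g * U[u] - reg * Eu[e_neg])
    Ei[e_pos] += lr * (g * V[i] - reg * Ei[e_pos])
    Ei[e_neg] += lr * (-g * V[i] - reg * Ei[e_neg])

print(rank_explanations(0, 0))  # top-5 explanation indices for user 0, item 0
```

The point of the sketch is the sparsity argument from the abstract: instead of estimating one entry per (user, item, explanation) triple, each observed triple contributes training signal to two much denser binary matrices, so every explanation is learned from all users and all items that mention it.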