State-of-the-art recommender systems (RS) mostly rely on complex deep neural network (DNN) structures, which makes it difficult to provide explanations along with RS decisions. Previous research has shown that providing explanations along with recommended items helps users make informed decisions and improves their trust in otherwise uninterpretable black-box systems. In model-agnostic explainable recommendation, system designers deploy a separate explanation model that takes input from the decision model and generates explanations to meet the goal of persuasiveness. In this work, we explore the task of ranking textual rationales (supporting evidence) for model-agnostic explainable recommendation. Most existing rationale ranking algorithms utilize only rationale IDs and interaction matrices to build latent factor representations, so the semantic information within the textual rationales is not learned effectively. We argue that this design is suboptimal, because the semantic information within the textual rationales could be used to better profile user preferences and item features. To fill this gap, we propose Semantic-Enhanced Bayesian Personalized Explanation Ranking (SE-BPER), a model that effectively combines interaction information and semantic information. SE-BPER first initializes the latent factor representations with contextualized embeddings generated by a transformer model, then optimizes them with the interaction data. Extensive experiments show that this methodology improves rationale ranking performance while simplifying model training (fewer hyperparameters and faster convergence). We conclude that the optimal way to combine semantic and interaction information remains an open question in the task of rationale ranking.
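To make the described methodology concrete, the following is a minimal PyTorch sketch of the SE-BPER idea as stated in the abstract: latent factors for rationales are initialized from transformer embeddings of the rationale texts, then refined together with user and item factors under a BPR-style pairwise ranking objective on the interaction data. The encoder name ("all-MiniLM-L6-v2"), the random projection to the latent dimension, and the additive user-item context are illustrative assumptions, not the paper's exact formulation.

import torch
import torch.nn as nn
from sentence_transformers import SentenceTransformer

class SEBPER(nn.Module):
    def __init__(self, num_users, num_items, rationale_texts, dim=64):
        super().__init__()
        # Contextualized embeddings for each rationale (assumed encoder choice).
        encoder = SentenceTransformer("all-MiniLM-L6-v2")
        emb = torch.tensor(encoder.encode(rationale_texts), dtype=torch.float32)
        # Project the semantic embeddings to the latent-factor dimension and
        # use the result to initialize the rationale factors (semantic init).
        proj = nn.Linear(emb.size(1), dim, bias=False)
        with torch.no_grad():
            init = proj(emb)
        self.rationale_factors = nn.Parameter(init)
        self.user_factors = nn.Parameter(torch.randn(num_users, dim) * 0.01)
        self.item_factors = nn.Parameter(torch.randn(num_items, dim) * 0.01)

    def score(self, u, i, r):
        # Score a (user, item, rationale) triple via inner products.
        ctx = self.user_factors[u] + self.item_factors[i]
        return (ctx * self.rationale_factors[r]).sum(-1)

    def bpr_loss(self, u, i, r_pos, r_neg):
        # Pairwise BPR loss: an observed rationale should outrank a sampled one.
        diff = self.score(u, i, r_pos) - self.score(u, i, r_neg)
        return -torch.log(torch.sigmoid(diff)).mean()

In this sketch the semantic information enters only through the initialization, after which all factors are free parameters optimized on interactions; this matches the abstract's claim that initialization alone can improve ranking quality while keeping training simple.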