In recent years, the field of recommendation systems has attracted increasing attention to developing predictive models that explain why an item is recommended to a user. Such explanations can either be obtained by post-hoc diagnostics after fitting a relatively complex model or be embedded in an intrinsically interpretable model. In this paper, we propose an explainable recommendation system based on a generalized additive model with manifest and latent interactions (GAMMLI). This model architecture is intrinsically interpretable, as it additively consists of the user and item main effects, the manifest user-item interactions based on observed features, and the latent interaction effects estimated from residuals. Unlike conventional collaborative filtering methods, GAMMLI takes the group effects of users and items into account, which enhances model interpretability and also helps mitigate the cold-start recommendation problem. A new Python package, GAMMLI, is developed for efficient model training and visualized interpretation of the results. Numerical experiments on simulated data and real-world cases show that the proposed method has advantages in both predictive performance and explainable recommendation.
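The additive structure described above can be sketched as follows. This is an illustrative decomposition, not the paper's exact formulation; the symbols (\(\mu\), \(f_u\), \(f_i\), \(f_{jk}\), \(\theta\)) are assumed notation introduced here for clarity:

```latex
% Hypothetical sketch of the GAMMLI-style decomposition for the
% predicted rating of user u on item i:
\hat{y}_{ui}
  = \underbrace{\mu}_{\text{intercept}}
  + \underbrace{f_u(\mathbf{x}_u)}_{\text{user main effect}}
  + \underbrace{f_i(\mathbf{z}_i)}_{\text{item main effect}}
  + \underbrace{\sum_{j,k} f_{jk}(x_{uj}, z_{ik})}_{\text{manifest interactions}}
  + \underbrace{\boldsymbol{\theta}_{g(u)}^{\top}\,\boldsymbol{\theta}_{h(i)}}_{\text{latent interaction (group-level)}}
```

Here \(\mathbf{x}_u\) and \(\mathbf{z}_i\) denote observed user and item features, the manifest interaction terms are fitted on those features, and the latent term is fitted to the residuals after the main and manifest effects; indexing the latent factors by user group \(g(u)\) and item group \(h(i)\) reflects the group effects mentioned in the abstract, which is also what allows a cold-start user or item to inherit its group's latent factors.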