Explainable recommendation has attracted considerable attention due to renewed interest in explainable artificial intelligence. In particular, post-hoc approaches have proved to be the most readily applicable to increasingly complex recommendation models, which are then treated as black boxes. Recent literature has shown that post-hoc explanations based on local surrogate models suffer from problems related to the robustness of the approach itself. This consideration becomes even more relevant in human-centered tasks such as recommendation, where the explanation also bears the arduous task of enhancing increasingly important aspects of the user experience, such as transparency and trustworthiness. This paper aims to show that the characteristics of a classical post-hoc model based on surrogates are strongly model-dependent, and that the approach cannot be held accountable for the explanations it generates.
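To make the setting concrete, the following is a minimal sketch of a LIME-style local surrogate explanation, the class of post-hoc method the abstract refers to. All names, the synthetic data, the kernel width, and the noise scale are illustrative assumptions, not the paper's actual setup: a black-box regressor stands in for the recommendation model, and a weighted linear model is fitted on perturbations around one instance to produce local feature attributions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Hypothetical black box: a model predicting a score from 5 features,
# standing in for an arbitrarily complex recommender.
X = rng.normal(size=(500, 5))
y = 2.0 * X[:, 0] + np.sin(X[:, 1]) + rng.normal(scale=0.1, size=500)
black_box = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

def local_surrogate(instance, model, n_samples=1000, kernel_width=0.75):
    """Fit a weighted linear surrogate around one instance (LIME-style)."""
    # Perturb the instance with Gaussian noise to sample its neighborhood.
    samples = instance + rng.normal(scale=0.5, size=(n_samples, instance.size))
    preds = model.predict(samples)
    # Weight each perturbed sample by proximity to the instance (RBF kernel),
    # so the linear fit is local rather than global.
    dists = np.linalg.norm(samples - instance, axis=1)
    weights = np.exp(-(dists ** 2) / (kernel_width ** 2))
    surrogate = Ridge(alpha=1.0).fit(samples, preds, sample_weight=weights)
    # The surrogate's coefficients serve as the post-hoc explanation.
    return surrogate.coef_

coefs = local_surrogate(X[0], black_box)
print(coefs)
```

The robustness concern raised above can be seen directly in such a sketch: the attributions depend on the perturbation scale, the kernel width, and the random seed, so rerunning the explainer with different choices can yield different explanations for the same prediction.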