The number of information systems (IS) studies dealing with explainable artificial intelligence (XAI) is currently exploding as the field demands more transparency about the internal decision logic of machine learning (ML) models. However, most techniques subsumed under XAI provide post-hoc analytical explanations, which must be considered with caution because they rely only on approximations of the underlying ML model. Therefore, our paper investigates a series of intrinsically interpretable ML models and discusses their suitability for the IS community. More specifically, our focus is on advanced extensions of generalized additive models (GAM) in which predictors are modeled independently in a non-linear way to generate shape functions that can capture arbitrary patterns but remain fully interpretable. In our study, we evaluate the prediction quality of five GAMs against six traditional ML models and assess their visual outputs with regard to model interpretability. On this basis, we investigate their merits and limitations and derive design implications for further improvements.
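As a minimal illustration of the kind of model the abstract refers to (not the specific implementations evaluated in the paper), the following sketch fits a GAM with independent non-linear spline terms using the pygam library on synthetic data; the learned shape function of each feature can then be extracted and plotted for interpretation. Data and term choices here are assumptions for demonstration only.

```python
import numpy as np
from pygam import LinearGAM, s

# Synthetic data: additive, feature-wise non-linear ground truth (illustrative assumption)
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(500, 3))
y = np.sin(2 * np.pi * X[:, 0]) + X[:, 1] ** 2 + 0.5 * X[:, 2] + rng.normal(0, 0.1, 500)

# Each predictor gets its own smooth spline term, so the model stays additive
# and each feature's contribution can be inspected in isolation.
gam = LinearGAM(s(0) + s(1) + s(2)).fit(X, y)

# Extract the shape function of the first feature as a partial dependence curve.
grid = gam.generate_X_grid(term=0)
shape_values = gam.partial_dependence(term=0, X=grid)
```

Plotting `shape_values` against the corresponding column of `grid` yields the kind of visual output whose interpretability the study assesses.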