As we rely more and more on machine learning models for real-life decision-making, being able to understand and trust their predictions becomes ever more important. Local explainer models have recently been introduced to explain the predictions of complex machine learning models at the instance level. In this paper, we propose Local Rule-based Model Interpretability with k-optimal Associations (LoRMIkA), a novel model-agnostic approach that obtains k-optimal association rules from a neighbourhood of the instance to be explained. In contrast to other rule-based approaches in the literature, we argue that the most predictive rules are not necessarily the rules that provide the best explanations. Consequently, the LoRMIkA framework provides a flexible way to obtain predictive and interesting rules. It uses an efficient search algorithm guaranteed to find the k-optimal rules with respect to objectives such as confidence, lift, leverage, coverage, and support. It also provides multiple rules that explain the decision, as well as counterfactual rules that indicate potential changes needed to obtain a different output for a given instance. We compare our approach to other state-of-the-art approaches in local model interpretability on three different datasets and achieve competitive results in terms of local accuracy and interpretability.
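For concreteness, the sketch below illustrates how the rule-quality objectives named above (support, coverage, confidence, lift, and leverage) are commonly defined for an association rule X → Y over a neighbourhood of instances. The function and variable names are our own illustrative choices; this is a minimal example of the standard metric definitions, not the authors' implementation of LoRMIkA or its k-optimal search.

```python
# Illustrative sketch: standard rule-quality metrics for a rule X -> Y,
# computed over a set of instances encoded as item sets. Not the LoRMIkA
# search algorithm itself; names and the toy data are hypothetical.

def rule_metrics(instances, antecedent, consequent):
    """Score a rule `antecedent -> consequent` on a list of instances.

    instances  : list of sets of items (one set per instance)
    antecedent : set of items forming the rule body (X)
    consequent : set of items forming the rule head (Y)
    """
    n = len(instances)
    n_x = sum(1 for t in instances if antecedent <= t)    # instances matching X
    n_y = sum(1 for t in instances if consequent <= t)    # instances matching Y
    n_xy = sum(1 for t in instances if antecedent | consequent <= t)

    coverage = n_x / n                                # P(X): how often the rule applies
    support = n_xy / n                                # P(X and Y)
    confidence = n_xy / n_x if n_x else 0.0           # P(Y | X)
    lift = confidence / (n_y / n) if n_y else 0.0     # confidence vs. baseline P(Y)
    leverage = support - (n_x / n) * (n_y / n)        # P(X,Y) - P(X)P(Y)

    return {"coverage": coverage, "support": support,
            "confidence": confidence, "lift": lift, "leverage": leverage}


# Toy neighbourhood of five instances around the instance to be explained.
neighbourhood = [
    {"age>40", "smoker", "risk=high"},
    {"age>40", "smoker", "risk=high"},
    {"age>40", "risk=low"},
    {"smoker", "risk=high"},
    {"age>40", "smoker", "risk=low"},
]
print(rule_metrics(neighbourhood, {"age>40", "smoker"}, {"risk=high"}))
```

A k-optimal rule miner would rank candidate rules by such objectives and return only the top k, rather than all rules above fixed support/confidence thresholds as in classical association rule mining.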