The relevance of machine learning (ML) in our daily lives is closely intertwined with its explainability. Explainability allows end-users to form a transparent and humane assessment of an ML system's capabilities and utility, and it fosters the user's confidence in the system's automated decisions. Explaining a model's decision in terms of its variables, or features, is a pressing need of the present times. To the best of our knowledge, no existing work explains features on the basis of their class-distinguishing abilities, even though real-world data are mostly multi-class in nature. In any given dataset, a feature is not equally good at distinguishing between the different possible categorizations (or classes) of the data points. In this work, we explain features on the basis of their class-distinguishing capabilities. In particular, we estimate the class-distinguishing scores of the variables for pairwise class combinations. We empirically validate the explainability offered by our scheme on several real-world, multi-class datasets. We further utilize the class-distinguishing scores in a latent-feature context and propose a novel decision-making protocol. Another novelty of this work is a \emph{refuse to render decision} option, invoked when the latent variable of the test point has a high class-distinguishing potential for the likely classes.
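The abstract does not specify how the pairwise class-distinguishing scores are computed. As a minimal sketch of the general idea, the snippet below scores each feature for every pair of classes using a Fisher-style separability ratio (squared gap between class-conditional means over the summed class-conditional variances); this particular score is an illustrative assumption, not the paper's actual definition.

```python
import numpy as np
from itertools import combinations

def pairwise_class_distinguishing_scores(X, y):
    """For every pair of classes, return one score per feature.
    Higher score = the feature separates that class pair better.
    Fisher-style ratio used here is an illustrative stand-in."""
    scores = {}
    for a, b in combinations(np.unique(y), 2):
        Xa, Xb = X[y == a], X[y == b]
        mean_gap = (Xa.mean(axis=0) - Xb.mean(axis=0)) ** 2
        var_sum = Xa.var(axis=0) + Xb.var(axis=0) + 1e-12  # avoid /0
        scores[(a, b)] = mean_gap / var_sum
    return scores

# Tiny synthetic example: feature 0 separates class 0 from class 1
# (shifted means), while feature 1 is uninformative noise.
rng = np.random.default_rng(0)
X = np.column_stack([
    np.concatenate([rng.normal(0, 1, 100), rng.normal(5, 1, 100)]),
    rng.normal(0, 1, 200),
])
y = np.array([0] * 100 + [1] * 100)
s = pairwise_class_distinguishing_scores(X, y)
assert s[(0, 1)][0] > s[(0, 1)][1]  # feature 0 distinguishes the pair far better
```

A feature can thus score high for one class pair and low for another, which is exactly the per-pair granularity the abstract argues a multi-class explanation needs.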