Deep learning models are increasingly applied to imbalanced data in high-stakes fields such as medicine, autonomous driving, and intelligence analysis. Imbalanced data compounds the black-box nature of deep networks because the relationships between classes may be highly skewed and unclear. This can reduce the trust of model users and hamper the progress of developers of imbalanced learning algorithms. Existing methods for investigating imbalanced data complexity are geared toward binary classification, shallow learning models, and low-dimensional data. In addition, current eXplainable Artificial Intelligence (XAI) techniques mainly focus on converting opaque deep learning models into simpler models (e.g., decision trees) or mapping predictions for specific instances to inputs, instead of examining global data properties and complexities. There is therefore a need for a framework that is tailored to modern deep networks, handles large, high-dimensional, multi-class datasets, and uncovers data complexities commonly found in imbalanced data (e.g., class overlap, sub-concepts, and outlier instances). We propose a set of techniques that can be used by deep learning model users to identify, visualize, and understand class prototypes, sub-concepts, and outlier instances, and by imbalanced learning algorithm developers to detect features and class exemplars that are key to model performance. Our framework also identifies instances that reside on the border of class decision boundaries, which can carry highly discriminative information. Unlike many existing XAI techniques, which map model decisions to gray-scale pixel locations, we use saliency through back-propagation to identify and aggregate image color bands across entire classes. Our framework is publicly available at \url{https://github.com/dd1github/XAI_for_Imbalanced_Learning}.
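To make the last point concrete, the following is a minimal sketch (not the authors' released code; the function name, PyTorch usage, and data-loader interface are assumptions) of how back-propagated saliency can be aggregated per image color band over all instances of a class rather than reported as per-pixel maps.

\begin{verbatim}
# Minimal sketch: class-level color-band saliency via back-propagation.
# Assumes a PyTorch image classifier and a DataLoader yielding
# (image, label) batches with images of shape (B, 3, H, W).
import torch

def class_channel_saliency(model, loader, num_classes, device="cpu"):
    """Aggregate absolute input gradients per RGB channel for each class."""
    model.eval()
    channel_saliency = torch.zeros(num_classes, 3)  # running sums per class
    counts = torch.zeros(num_classes)               # images seen per class
    for images, labels in loader:
        images = images.to(device).requires_grad_(True)
        logits = model(images)
        # Back-propagate the score of the true class for each image.
        score = logits.gather(1, labels.to(device).view(-1, 1)).sum()
        model.zero_grad()
        score.backward()
        # |gradient| averaged over spatial dims -> one value per color band.
        sal = images.grad.abs().mean(dim=(2, 3)).detach().cpu()  # (B, 3)
        for c in range(num_classes):
            mask = labels == c
            channel_saliency[c] += sal[mask].sum(dim=0)
            counts[c] += mask.sum()
    # Mean saliency per color band for each class, shape (num_classes, 3).
    return channel_saliency / counts.clamp(min=1).unsqueeze(1)
\end{verbatim}

Averaging the absolute input gradients over spatial locations, rather than keeping per-pixel maps, yields one saliency score per color band that can be compared across entire classes.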