Rapid improvements in the performance of machine learning models have pushed them to the forefront of data-driven decision-making. Meanwhile, the increased integration of these models into various application domains has further highlighted the need for greater interpretability and transparency. To identify problems such as bias, overfitting, and incorrect correlations, data scientists require tools that explain the mechanisms by which these models make decisions. In this paper we introduce AdViCE, a visual analytics tool that aims to guide users in black-box model debugging and validation. The solution rests on two main visual user interface innovations: (1) an interactive visualization design that enables the comparison of decisions on user-defined data subsets; (2) an algorithm and visual design to compute and visualize counterfactual explanations: explanations that depict model outcomes when data features are perturbed from their original values. We provide a demonstration of the tool through a use case that showcases the capabilities and potential limitations of the proposed approach.
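To make the counterfactual idea concrete, the following is a minimal sketch, not AdViCE's actual algorithm or interface: it trains a simple classifier on synthetic data, then perturbs one feature of an instance across a range of values and records how the predicted outcome shifts. The choice of model, dataset, feature index, and perturbation range are all illustrative assumptions.

```python
# Minimal counterfactual-style perturbation sketch (illustrative only;
# not the AdViCE algorithm). Requires scikit-learn and NumPy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Hypothetical setup: a synthetic binary-classification task.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
model = LogisticRegression().fit(X, y)

instance = X[0].copy()
feature_idx = 2  # feature to perturb (arbitrary illustrative choice)

# Sweep the feature away from its original value and observe how the
# model's predicted probability for class 1 responds.
for delta in np.linspace(-2.0, 2.0, 9):
    perturbed = instance.copy()
    perturbed[feature_idx] += delta
    prob = model.predict_proba(perturbed.reshape(1, -1))[0, 1]
    print(f"delta={delta:+.2f}  P(class=1)={prob:.3f}")
```

In spirit, a tool like the one described aggregates and visualizes many such perturbation-and-predict queries so that users can see where the model's decision boundary lies relative to an instance's original feature values.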