Traditional deep learning interpretability methods that are suitable for model users cannot explain network behavior at the global level and are inflexible at providing fine-grained explanations. As a solution, concept-based explanations are gaining attention due to their human intuitiveness and their flexibility to describe both global and local model behavior. Concepts are groups of semantically meaningful pixels that express a shared notion; they are embedded within the network's latent space and have commonly been hand-crafted, but have recently been discovered by automated approaches. Unfortunately, the magnitude and diversity of discovered concepts make it difficult to navigate and make sense of the concept space. Visual analytics can serve a valuable role in bridging these gaps by enabling structured navigation and exploration of the concept space, providing users with concept-based insights into model behavior. To this end, we design, develop, and validate ConceptExplainer, a visual analytics system that enables people to interactively probe and explore the concept space to explain model behavior at the instance, class, and global level. The system was developed via iterative prototyping to address a number of design challenges that model users face in interpreting the behavior of deep learning models. Via a rigorous user study, we validate how ConceptExplainer addresses these challenges. Likewise, we conduct a series of usage scenarios to demonstrate how the system supports the interactive analysis of model behavior across a variety of tasks and explanation granularities, such as identifying concepts that are important to classification, identifying bias in training data, and understanding how concepts can be shared across diverse and seemingly dissimilar classes.
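The abstract mentions that concepts are embedded in the network's latent space and can be discovered by automated approaches. A minimal sketch of one such approach, in the style of ACE-like methods (not the paper's own implementation), clusters the latent activations of image segments so that each cluster becomes a candidate concept. The random activations below are hypothetical stand-ins for real segment embeddings:

```python
# Hedged sketch of automated concept discovery: image segments are
# embedded in a network's latent space, then clustered; each cluster
# is treated as a candidate "concept". The synthetic activations below
# are stand-ins for real segment embeddings from a trained model.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Pretend we embedded 300 image segments into a 64-d latent space,
# drawn from three underlying groups (e.g. "fur", "sky", "wheel").
centers = rng.normal(size=(3, 64)) * 5.0
segment_activations = np.vstack(
    [center + rng.normal(size=(100, 64)) for center in centers]
)

# Cluster the activations; each cluster is one discovered concept.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
concept_labels = kmeans.fit_predict(segment_activations)

# Each concept is summarized by its centroid, a direction in latent
# space that can later be compared against new activations to score
# how strongly an instance or class expresses the concept.
concept_centroids = kmeans.cluster_centers_
print(concept_centroids.shape)  # (3, 64)
```

In a real pipeline the segments would come from superpixel segmentation of training images and the embeddings from an intermediate layer of the model under study; the clustering step is what makes the concept space large and diverse, which is exactly the navigation problem the abstract identifies.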