Empirical evidence shows that deep vision networks often represent concepts as directions in latent space, with concept information written along directional components of the input's vector representation. However, the mechanism that encodes (writes) and decodes (reads) concept information to and from these representations is not directly accessible: it is a latent mechanism that emerges naturally from the network's training process. Recovering it would go a long way toward opening the black box of deep networks, enabling us to understand, debug, and improve deep learning models. In this work, we propose an unsupervised method to recover this mechanism. We show that, under the hypothesis of linear concept representations, the mechanism for each concept can be implemented by a pair of directions: one that encodes concept information and one that decodes it. Unlike prior matrix decomposition, autoencoder, and dictionary learning methods that rely on feature reconstruction, we propose a new perspective: decoding directions are identified via directional clustering of activations, and encoding directions are estimated with signal vectors under a probabilistic view. We further leverage the network weights through a novel technique, Uncertainty Region Alignment, which reveals interpretable directions that affect predictions. Our analysis shows that (a) on synthetic data, our method recovers ground-truth direction pairs; (b) on real data, decoding directions map to monosemantic, interpretable concepts and outperform unsupervised baselines; and (c) signal vectors faithfully estimate encoding directions, as validated via activation maximization. Finally, we demonstrate applications in understanding global model behavior, explaining individual predictions, and intervening on representations to produce counterfactuals or correct model errors.
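To make the read/write view concrete, here is a minimal sketch of the hypothesized linear mechanism, assuming a single concept with unit-norm encoding direction `e` and decoding direction `d`; the directions, dimensionality, and helper names are illustrative placeholders, not outputs of our pipeline.

```python
import numpy as np

# Minimal sketch of the hypothesized linear read/write mechanism.
# `e`, `d`, and `dim` are illustrative placeholders, not outputs of
# the method described above.
rng = np.random.default_rng(0)
dim = 512                      # latent dimensionality (illustrative)
e = rng.normal(size=dim)       # encoding direction: writes the concept
e /= np.linalg.norm(e)
d = e.copy()                   # decoding direction: reads the concept
                               # (coincides with e only in the isotropic
                               # case; in general the pair differs)

def write_concept(z, strength):
    """Encode concept information by moving the activation along e."""
    return z + strength * e

def read_concept(z):
    """Decode concept strength by projecting the activation onto d."""
    return float(z @ d)

z = rng.normal(size=dim)       # an arbitrary activation vector
delta = read_concept(write_concept(z, 3.0)) - read_concept(z)
print(delta)                   # ~3.0: the written signal is read back
```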
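The two estimation steps admit similarly compact sketches. Below, directional clustering is instantiated as spherical k-means on unit-normalized activations, and the signal vector is computed as the covariance between activations and their projection onto a decoding direction (a Haufe-style pattern estimate). Both are plausible instantiations under the stated assumptions rather than the exact estimators of the method, and `spherical_kmeans` and `signal_vector` are hypothetical helper names.

```python
import numpy as np

def spherical_kmeans(X, k, iters=50, seed=0):
    """Directional clustering: k-means with cosine similarity on
    unit-normalized activations; centroids serve as candidate
    decoding directions."""
    rng = np.random.default_rng(seed)
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    centroids = Xn[rng.choice(len(Xn), size=k, replace=False)]
    for _ in range(iters):
        labels = (Xn @ centroids.T).argmax(axis=1)   # nearest by cosine
        for j in range(k):
            members = Xn[labels == j]
            if len(members):
                c = members.sum(axis=0)
                centroids[j] = c / np.linalg.norm(c)
    return centroids, labels

def signal_vector(X, d):
    """Signal-vector estimate of the encoding direction paired with a
    decoding direction d: Cov(x, d^T x), normalized. An assumed
    instantiation of the probabilistic view, not the exact estimator."""
    Xc = X - X.mean(axis=0)
    s = Xc @ d                       # read-out of each sample along d
    a = Xc.T @ s / len(X)            # covariance between x and the read-out
    return a / np.linalg.norm(a)

# Usage on an (n_samples, dim) activation matrix (random stand-in here):
X = np.random.default_rng(1).normal(size=(1000, 64))
D, _ = spherical_kmeans(X, k=10)                   # decoding directions
E = np.stack([signal_vector(X, di) for di in D])   # paired encoding directions
```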