Can language models (LMs) learn to faithfully describe their internal computations? Are they better able to describe themselves than other models? We study the extent to which LMs' privileged access to their own internals can be leveraged to produce new techniques for explaining their behavior. Using existing interpretability techniques as a source of ground truth, we fine-tune LMs to generate natural language descriptions of (1) the information encoded by LM features, (2) the causal structure of LMs' internal activations, and (3) the influence of specific input tokens on LM outputs. When trained with only tens of thousands of example explanations, explainer models exhibit non-trivial generalization to new queries. This generalization appears partly attributable to explainer models' privileged access to their own internals: using a model to explain its own computations generally works better than using a *different* model to explain its computations (even if the other model is significantly more capable). Our results suggest not only that LMs can learn to reliably explain their internal computations, but that such explanations offer a scalable complement to existing interpretability methods. Code and data are available at https://github.com/TransluceAI/introspective-interp.