Recent advances in interpretability research have made transformer language models more transparent. This progress has led to a better understanding of the inner workings of both toy and naturally occurring models. However, how these models internally process sentiment changes has yet to be answered sufficiently. In this work, we introduce a new interpretability tool called PCP ablation, in which we replace modules with low-rank matrices based on the principal components of their activations, reducing model parameters and behavior to their essentials. We demonstrate PCP ablation on MLP and attention layers in backdoored toy, backdoored large, and naturally occurring models. We identify MLPs as the most important component of the backdoor mechanism and use this knowledge to remove, insert, and modify backdoor mechanisms with engineered replacements via PCP ablation.
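The sketch below illustrates one possible way to fit such a PCP replacement, assuming access to cached input/output activations of the module being ablated. The helper names (`fit_pcp_replacement`, `PCPReplacement`) and the exact fitting procedure are illustrative assumptions, not a reference implementation of the method described above.

```python
# Minimal sketch: replace a module with a low-rank linear map built from the
# principal components of its output activations (an assumed reading of PCP
# ablation, not the authors' reference implementation).
import torch


def fit_pcp_replacement(inputs: torch.Tensor, outputs: torch.Tensor, rank: int):
    """Fit a low-rank linear map approximating a module on its own activations.

    inputs, outputs: (n_samples, d_model) activations entering/leaving the module.
    Returns (weight, bias), where weight is (d_model, d_model) with rank <= `rank`,
    constrained to the top principal components of the output activations.
    """
    mean_out = outputs.mean(dim=0, keepdim=True)
    # Principal components of the centered output activations via SVD.
    _, _, vt = torch.linalg.svd(outputs - mean_out, full_matrices=False)
    components = vt[:rank]                                    # (rank, d_model)

    # Least-squares map from inputs to principal-component coordinates,
    # projected back to d_model, so the replacement acts only within the
    # top principal subspace of the original module's outputs.
    coords = (outputs - mean_out) @ components.T              # (n_samples, rank)
    w_low = torch.linalg.lstsq(inputs, coords).solution       # (d_model, rank)
    weight = w_low @ components                                # (d_model, d_model)
    bias = (outputs - inputs @ weight).mean(dim=0)             # residual offset
    return weight, bias


class PCPReplacement(torch.nn.Module):
    """Drop-in stand-in for the ablated MLP or attention block."""

    def __init__(self, weight: torch.Tensor, bias: torch.Tensor):
        super().__init__()
        self.register_buffer("weight", weight)
        self.register_buffer("bias", bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x @ self.weight + self.bias
```

In this reading, the low-rank matrix both compresses the module and exposes which directions in activation space carry its behavior, which is what makes engineered replacements (removing, inserting, or modifying a backdoor) possible.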