Finetuning (pretrained) language models is a standard approach for updating their internal parametric knowledge and specializing them to new tasks and domains. However, the corresponding model weight changes ("weight diffs") are not generally interpretable. While inspecting the finetuning dataset can give a sense of how the model might have changed, these datasets are often not publicly available or are too large to work with directly. Toward the goal of comprehensively describing weight diffs in natural language, we introduce Diff Interpretation Tuning (DIT), a method that trains models to describe their own finetuning-induced modifications. Our approach uses synthetic, labeled weight diffs to train a DIT adapter, which can be applied to a compatible finetuned model to make it describe how it has changed. We demonstrate in two proof-of-concept settings (reporting hidden behaviors and summarizing finetuned knowledge) that our method enables models to describe their finetuning-induced modifications using accurate natural language descriptions.