Pretrained large language models (LLMs) are able to solve a wide variety of tasks through transfer learning. Various explainability methods have been developed to investigate their decision-making process. TracIn (Pruthi et al., 2020) is one such gradient-based method, which explains model inferences in terms of the influence of training examples. In this paper, we explore the use of TracIn to improve model performance in the parameter-efficient tuning (PET) setting. We develop conversational safety classifiers via the prompt-tuning PET method and show how the unique characteristics of the PET regime enable TracIn to identify the cause of certain misclassifications by LLMs. We develop a new methodology for using gradient-based explainability techniques to improve model performance, G-BAIR: gradient-based automated iterative recovery. We show that G-BAIR can recover LLM performance on benchmarks after manually corrupting training labels. This suggests that influence methods like TracIn can be used to automatically perform data cleaning, and introduces the potential for interactive debugging and relabeling for PET-based transfer learning methods.
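As background for the influence method referenced above, the core TracIn quantity (Pruthi et al., 2020) can be sketched in a few lines: the influence of a training example on a test example is approximated by summing, over saved training checkpoints, the learning rate times the dot product of the two examples' loss gradients. This is a minimal illustrative sketch, not the paper's implementation; the function name and flattened-gradient representation are assumptions for illustration.

```python
import numpy as np

def tracin_score(train_grads, test_grads, lrs):
    """Approximate TracIn influence of one training example on one
    test example.

    train_grads, test_grads: lists of flattened loss-gradient vectors
        (np.ndarray), one per saved checkpoint, for the training and
        test example respectively.
    lrs: learning rate used at each checkpoint.
    """
    # Sum of learning-rate-weighted gradient dot products across
    # checkpoints, following the TracInCP approximation.
    return sum(lr * float(np.dot(g_tr, g_te))
               for lr, g_tr, g_te in zip(lrs, train_grads, test_grads))
```

A mislabeled training example tends to have strongly negative influence on correctly labeled test examples, which is the signal an iterative cleaning procedure like G-BAIR can exploit to flag candidates for relabeling.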