Parameter-efficient fine-tuning has become the dominant paradigm for adapting large language models to downstream tasks. Low-rank adaptation methods such as LoRA assume that task-relevant weight updates reside in a low-rank subspace, yet this subspace is learned implicitly from data in a black-box manner, offering no interpretability or direct control. We hypothesize that this opacity stems from polysemanticity--individual dimensions encoding multiple entangled concepts. To address this, we leverage pre-trained Sparse Autoencoders (SAEs) to identify task-relevant features in a disentangled feature space, and then construct an explicit, interpretable low-rank subspace to guide adapter initialization. We provide a theoretical analysis proving that, under monosemanticity assumptions, SAE-based subspace identification achieves arbitrarily small recovery error, whereas direct identification in the polysemantic space suffers an irreducible error floor. On safety alignment, our method achieves a safety rate of up to 99.6%--exceeding full fine-tuning by 7.4 percentage points and approaching RLHF-based methods--while updating only 0.19-0.24% of parameters. Crucially, our method provides interpretable insights into the learned alignment subspace through the semantic grounding of SAE features. Our work demonstrates that incorporating mechanistic interpretability into the fine-tuning process can simultaneously improve performance and transparency.
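To make the pipeline concrete, below is a minimal sketch of the general idea rather than the paper's exact algorithm: it scores SAE features by a hypothetical mean-activation difference between task and control prompts, takes the corresponding SAE decoder directions as an explicit subspace, and uses an orthonormal basis of that subspace to initialize the LoRA factors. All names, shapes, and the top-k selection rule (`W_dec`, `acts_task`, `acts_ctrl`, `rank`) are illustrative assumptions, not artifacts of the actual method.

```python
# Sketch: SAE-guided subspace identification for LoRA initialization.
# Shapes, data, and the scoring rule are placeholders for illustration only.
import torch

d_model, d_sae, rank = 768, 16384, 16          # hidden size, SAE dictionary size, adapter rank

# Stand-ins for a pre-trained SAE decoder and cached SAE feature activations.
W_dec = torch.randn(d_sae, d_model)            # rows: feature directions in the model's hidden space
acts_task = torch.rand(1000, d_sae)            # SAE activations on task-relevant prompts
acts_ctrl = torch.rand(1000, d_sae)            # SAE activations on control prompts

# 1) Score features by how much more they activate on the task data (hypothetical criterion)
#    and keep the top-`rank` features.
scores = acts_task.mean(0) - acts_ctrl.mean(0)
top_idx = scores.topk(rank).indices

# 2) Collect the selected features' decoder directions and orthonormalize them via SVD
#    to obtain an explicit rank-r basis of the identified subspace.
directions = W_dec[top_idx]                    # (rank, d_model)
_, _, Vh = torch.linalg.svd(directions, full_matrices=False)  # Vh: (rank, d_model), orthonormal rows

# 3) Initialize a LoRA adapter so that its update delta_W = B @ A is constrained to the subspace.
lora_B = Vh.T.contiguous()                     # (d_model, rank): columns span the SAE-derived subspace
lora_A = torch.zeros(rank, d_model)            # zero-init so the adapter starts as a no-op
delta_W = lora_B @ lora_A                      # (d_model, d_model), initially zero as in standard LoRA
```

Initializing one factor with the subspace basis and the other at zero keeps the adapter inactive at the start of training, as in standard LoRA, while anchoring the column space of the update to semantically grounded SAE directions.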