We propose a novel framework, termed Fourier-Activated Adapter (FAA), for parameter-efficient fine-tuning of large pre-trained language models. By incorporating random Fourier features into lightweight adapter modules, FAA decomposes intermediate representations into complementary low- and high-frequency components, enabling frequency-aware modulation of semantic information. This design allows the model to selectively emphasize informative frequency bands during adaptation while preserving the representational capacity of the frozen backbone. Extensive experiments on GLUE, E2E NLG, and instruction-tuning benchmarks demonstrate that FAA consistently achieves competitive or superior performance compared to existing parameter-efficient fine-tuning methods, while maintaining low computational and memory overhead. Ablation studies further verify the effectiveness of frequency-aware activation and adaptive weighting mechanisms, highlighting FAA as a robust and efficient approach for post-training large language models.
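The adapter design described above can be illustrated with a minimal NumPy sketch. Everything here is an assumption for illustration, not the paper's implementation: the class name `FAAAdapter`, the heuristic of splitting frequency bands by the norm of the random projection vectors, and the fixed scalar mixing weight (which the paper describes as learned and adaptive) are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_fourier_features(x, W, b):
    # Classic random Fourier feature map: cos(xW + b)
    return np.cos(x @ W + b)

class FAAAdapter:
    """Hypothetical sketch of a Fourier-Activated Adapter.

    A bottleneck adapter whose nonlinearity is a random Fourier
    feature map; the features are split into "low" and "high"
    frequency bands and mixed before being projected back up.
    """

    def __init__(self, d_model, d_bottleneck, n_features, sigma=1.0):
        # Lightweight down/up projections (the trainable adapter weights)
        self.down = rng.normal(0, 0.02, (d_model, d_bottleneck))
        self.up = rng.normal(0, 0.02, (n_features, d_model))
        # Random Fourier frequencies; columns with small norm act as
        # low-frequency components (illustrative splitting heuristic)
        W = rng.normal(0, sigma, (d_bottleneck, n_features))
        norms = np.linalg.norm(W, axis=0)
        self.low_mask = (norms <= np.median(norms)).astype(float)
        self.W = W
        self.b = rng.uniform(0, 2 * np.pi, n_features)
        self.alpha = 0.5  # band-mixing weight (learned per the paper; fixed here)

    def __call__(self, h):
        z = random_fourier_features(h @ self.down, self.W, self.b)
        # Frequency-aware weighting of the low- and high-frequency bands
        z = self.alpha * z * self.low_mask + (1 - self.alpha) * z * (1 - self.low_mask)
        # Residual connection preserves the frozen backbone's representation
        return h + z @ self.up

h = rng.normal(size=(4, 768))       # a batch of token representations
adapter = FAAAdapter(d_model=768, d_bottleneck=64, n_features=128)
out = adapter(h)
print(out.shape)                    # same shape as the input: (4, 768)
```

Only `down`, `up`, and the mixing weight would be trained; the Fourier frequencies `W` and phases `b` stay random and fixed, which keeps the parameter count close to a plain bottleneck adapter.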