Parameter-efficient fine-tuning approaches have recently garnered considerable attention. With a much smaller number of trainable weights, these methods offer scalability and computational efficiency. In this paper, we search for optimal sub-networks and investigate the capability of different transformer modules in transferring knowledge from a pre-trained model to a downstream task. Our empirical results suggest that every transformer module in BERT can act as a winning ticket: fine-tuning each specific module while keeping the rest of the network frozen can yield performance comparable to full fine-tuning. Among the different modules, LayerNorms exhibit the best capacity for knowledge transfer with limited trainable weights: with only 0.003% of all parameters in the layer-wise analysis, they achieve acceptable performance on various target tasks. As for the reasons behind their effectiveness, we argue that their notable performance could be attributed to their high-magnitude weights compared to those of the other modules in pre-trained BERT.
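To make the setup concrete, the following is a minimal sketch (not the authors' released code) of the LayerNorm-only fine-tuning regime described above: all BERT parameters are frozen except those belonging to LayerNorm modules. It assumes the Hugging Face `transformers` library; the checkpoint name and classification head are illustrative choices, not taken from the paper.

```python
import torch
from transformers import BertForSequenceClassification

# Illustrative model and task head; the paper's experiments may differ.
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

# Freeze every parameter, then unfreeze only the LayerNorm weights and biases
# (Hugging Face BERT names these parameters with the substring "LayerNorm").
for name, param in model.named_parameters():
    param.requires_grad = "LayerNorm" in name

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"Trainable parameters: {trainable} / {total} ({100 * trainable / total:.4f}%)")

# Fine-tuning then proceeds as usual; only the unfrozen parameters
# receive gradient updates. Learning rate is an arbitrary example value.
optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
```

Note that under this sketch the classification head is frozen as well; in practice one might also unfreeze it, depending on whether the goal is to isolate the contribution of the LayerNorm parameters alone.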