Parameter-efficient learning methods (PERMs) have recently gained significant attention as they provide an efficient way for pre-trained language models (PLMs) to adapt to a downstream task. However, these conclusions are mostly drawn from in-domain evaluations over the full training set. In this paper, we present comparisons between PERMs and finetuning from three new perspectives: (1) the effect of sample and model size on in-domain evaluations, (2) generalization to unseen domains and new datasets, and (3) the faithfulness of generations. Our results show that for in-domain settings (a) there is a cross point in sample size below which PERMs perform better than finetuning, and (b) larger PLMs have larger cross points. For cross-domain and cross-dataset cases, we show that (a) Adapter (Houlsby et al., 2019) performs the best amongst all the PERMs studied here, and (b) it outperforms finetuning if the task dataset is below a certain size. We also compare the faithfulness of generations and show that PERMs can achieve a better faithfulness score than finetuning, by as much as 6%, especially for small training sets. Finally, we apply Adapter to MT-NLG 530b (Smith et al., 2022) and achieve new state-of-the-art results on Xsum (Narayan et al., 2018) for all ROUGE scores (ROUGE-1 49.17, ROUGE-2 27.20, ROUGE-L 40.98).
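To make the Adapter method referenced above concrete, the following is a minimal numpy sketch of a bottleneck adapter layer in the style of Houlsby et al. (2019), not the paper's actual implementation: a learned down-projection, a nonlinearity, an up-projection, and a residual connection, so that only the small projection matrices are trained while the PLM's weights stay frozen. All names and dimensions here are illustrative assumptions.

```python
import numpy as np

def adapter(h, W_down, W_up):
    """Bottleneck adapter: down-project, nonlinearity, up-project,
    then add a residual connection back to the input."""
    z = np.maximum(0.0, h @ W_down)  # ReLU used here for brevity (Houlsby et al. use GELU)
    return h + z @ W_up

rng = np.random.default_rng(0)
d_model, d_bottleneck = 8, 2  # toy sizes; real bottlenecks are far smaller than d_model
W_down = rng.standard_normal((d_model, d_bottleneck)) * 0.01
W_up = np.zeros((d_bottleneck, d_model))  # near-identity init: adapter starts as a no-op
h = rng.standard_normal((4, d_model))     # a batch of 4 hidden states
out = adapter(h, W_down, W_up)            # equals h before any training, by construction
```

The zero-initialized up-projection means the adapter initially passes hidden states through unchanged, a common choice so that inserting adapters does not disturb the pre-trained model at the start of training.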