This paper presents a systematic overview and comparison of parameter-efficient fine-tuning methods covering over 40 papers published between February 2019 and February 2023. These methods aim to resolve the infeasibility and impracticality of fine-tuning large language models by only training a small set of parameters. We provide a taxonomy that covers a broad range of methods and present a detailed method comparison with a specific focus on real-life efficiency and fine-tuning multibillion-scale language models.
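To make the core idea concrete, below is a minimal, illustrative sketch (not taken from the paper) of one common flavor of parameter-efficient fine-tuning: the pretrained weights are frozen and only a small low-rank update is trained, in the spirit of LoRA-style methods covered by the survey. The class name, layer sizes, and rank are hypothetical toy values chosen for readability.

```python
# Hedged sketch: freeze a pretrained linear layer and train only a small
# low-rank correction, so the number of updated parameters stays tiny.
import torch
import torch.nn as nn


class LowRankAdapter(nn.Module):
    """Adds a trainable low-rank update (B @ A) on top of a frozen linear layer."""

    def __init__(self, frozen_linear: nn.Linear, rank: int = 4):
        super().__init__()
        self.frozen = frozen_linear
        for p in self.frozen.parameters():
            p.requires_grad = False  # base model weights are never updated
        # Only these two small matrices receive gradients.
        self.A = nn.Parameter(torch.randn(rank, frozen_linear.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(frozen_linear.out_features, rank))

    def forward(self, x):
        # Frozen projection plus the small trainable correction.
        return self.frozen(x) + x @ self.A.T @ self.B.T


# Toy "pretrained" layer standing in for one component of a large model.
base = nn.Linear(768, 768)
peft_layer = LowRankAdapter(base, rank=8)

trainable = sum(p.numel() for p in peft_layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in peft_layer.parameters())
print(f"trainable parameters: {trainable} / {total}")
```

Running this prints a trainable-parameter count of roughly 2% of the layer's total, which illustrates why such methods remain practical for multibillion-parameter models where full fine-tuning is infeasible.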