While their enormous parameter scale endows Large Models (LMs) with unparalleled performance, it also limits their adaptability to specific tasks. Parameter-Efficient Fine-Tuning (PEFT) has emerged as a critical approach for effectively adapting LMs to a diverse range of downstream tasks. However, existing PEFT methods face two primary challenges: (1) High resource cost. Although PEFT methods significantly reduce resource demands compared to full fine-tuning, they still require substantial time and memory, making them impractical in resource-constrained environments. (2) Parameter dependency. PEFT methods rely heavily on updating a subset of parameters associated with the LM to incorporate task-specific knowledge. Yet, amid intensifying competition in the LM landscape, many companies have adopted closed-source policies for their leading models, offering access only via Application Programming Interfaces (APIs). Moreover, because the fine-tuning process of LMs is extremely slow, the associated expense is often prohibitive and difficult to sustain. Although small models generally perform far worse than LMs, they can achieve superior results on particular data distributions while requiring only minimal resources. Motivated by this insight, we propose Easy Adaptation (EA), which trains Specific Small Models (SSMs) to complement LMs on the data distributions they underfit. Extensive experiments show that EA matches the performance of PEFT on diverse tasks without accessing LM parameters, while requiring only minimal resources.