The rapid advancement of foundation models (large-scale neural networks trained on diverse, extensive datasets) has revolutionized artificial intelligence, enabling unprecedented progress across domains such as natural language processing, computer vision, and scientific discovery. However, the substantial parameter count of these models, often reaching billions or even trillions, poses significant challenges in adapting them to specific downstream tasks. Low-Rank Adaptation (LoRA) has emerged as a highly promising approach for mitigating these challenges, offering a parameter-efficient mechanism to fine-tune foundation models with minimal computational overhead. This survey provides the first comprehensive review of LoRA techniques that extends beyond large language models to general foundation models, covering recent technical foundations, emerging frontiers, and applications of low-rank adaptation across multiple domains. Finally, this survey discusses key challenges and future research directions in theoretical understanding, scalability, and robustness. It serves as a valuable resource for researchers and practitioners working on efficient foundation model adaptation.