Large language models (LLMs) have demonstrated remarkable performance across a wide range of language processing tasks. However, this success comes at the cost of substantial computation and memory requirements, which significantly impedes deployment in resource-constrained environments. To address this challenge, this work introduces an automation framework that combines weight pruning with low-bit quantization, together with a hardware-software co-design method that generates accelerators on the Field-Programmable Gate Array (FPGA) platform. In particular, we implement a unified pipeline that applies N:M structured pruning and 4-bit integer quantization to reduce the memory footprint, followed by optimized dequantization and matrix multiplication to accelerate LLM inference on several hardware platforms, including CPUs, NVIDIA GPUs with Dense and 2:4 Sparse Tensor Cores, and a custom systolic-array-based FPGA accelerator. Using 2:4 sparsity combined with quantization on $4096 \times 4096$ matrices, our approach reduces weight storage by up to $4\times$ and speeds up matrix multiplication by $1.71\times$, yielding a $1.29\times$ end-to-end latency reduction over dense GPU baselines. Scaling analysis on the LLaMA-7B model further shows that structured sparsity improves per-token throughput by $1.36\times$. These results demonstrate the synergy of fine-grained N:M sparsity and quantization in enabling efficient, deployable LLM inference, while the proposed FPGA accelerator offers a flexible architectural path toward supporting a broader class of sparsity patterns beyond the fixed 2:4 hardware constraint.
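To make the two compression steps concrete, the following minimal sketch (our illustration, not code from the paper) applies 2:4 magnitude pruning and symmetric 4-bit integer quantization to a weight matrix in NumPy. The function names `prune_2_4` and `quantize_int4`, and the per-row scale granularity, are assumptions chosen for clarity; the paper's actual pipeline and kernels may differ.

```python
# Illustrative sketch: 2:4 magnitude pruning followed by symmetric
# INT4 quantization of a weight matrix. All names and the per-row
# scale granularity are assumptions, not the paper's implementation.
import numpy as np

def prune_2_4(w: np.ndarray) -> np.ndarray:
    """Zero the two smallest-magnitude weights in every group of four
    along the last dimension (the 2:4 pattern that NVIDIA Sparse
    Tensor Cores accelerate)."""
    rows, cols = w.shape
    assert cols % 4 == 0, "2:4 pruning needs row length divisible by 4"
    groups = w.reshape(rows, cols // 4, 4)
    # Indices of the two largest-magnitude entries in each group of four.
    keep = np.argsort(np.abs(groups), axis=-1)[..., 2:]
    mask = np.zeros_like(groups, dtype=bool)
    np.put_along_axis(mask, keep, True, axis=-1)
    return (groups * mask).reshape(rows, cols)

def quantize_int4(w: np.ndarray):
    """Symmetric per-row INT4 quantization: integer codes in [-8, 7]
    plus one float scale per row; dequantize as q * scale."""
    scale = np.abs(w).max(axis=1, keepdims=True) / 7.0
    scale[scale == 0] = 1.0  # guard all-zero rows against division by zero
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

w = np.random.randn(8, 16).astype(np.float32)
w_sparse = prune_2_4(w)
q, scale = quantize_int4(w_sparse)
w_hat = q.astype(np.float32) * scale      # dequantized weights for matmul
print("kept fraction:", (w_sparse != 0).mean())  # exactly 0.5 by construction
```

Storing the INT4 codes (two per byte) plus the sparsity metadata, rather than dense FP16 weights, is what yields the up-to-$4\times$ storage reduction the abstract reports.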