State Space Models (SSMs) are emerging as a compelling alternative to Transformers because of their consistent memory usage and high performance. Despite this, scaling up SSMs on cloud services or resource-limited devices is challenging due to their storage and compute requirements. To overcome this, quantizing SSMs with low bit-width data formats can reduce model size and benefit from hardware acceleration. As SSMs are prone to quantization-induced errors, recent efforts have focused on optimizing a particular model or bit-width for efficiency without sacrificing performance. However, distinct bit-width configurations are essential for different scenarios, such as W4A8 for boosting large-batch decoding speed, and W4A16 for enhancing generation speed in short-prompt applications for a single user. To this end, we present Quamba2, compatible with W8A8, W4A8, and W4A16 for both Mamba1 and Mamba2 backbones, addressing the growing demand for SSM deployment on various platforms. Based on the channel order preservation and activation persistence of SSMs, we propose an offline approach to quantize inputs of a linear recurrence in 8-bit by sorting and clustering the input $x$, combined with a per-state-group quantization for the input-dependent parameters $B$ and $C$. To ensure compute-invariance in the SSM output, we rearrange weights offline according to the clustering sequence. The experiments show that Quamba2-8B outperforms two state-of-the-art SSM quantization methods and delivers 1.3$\times$ and 3$\times$ speed-ups in the pre-filling and generation stages, respectively, while offering 4$\times$ memory reduction with only a $1.6\%$ average accuracy drop. The evaluation on MMLU shows the generalizability and robustness of our framework. The code and quantized models will be released at: https://github.com/enyac-group/Quamba.
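To make the sort-and-cluster idea concrete, below is a minimal, hypothetical sketch of offline per-cluster 8-bit quantization: channels are sorted by their calibrated maximum magnitude, split into contiguous clusters, and each cluster receives its own INT8 scale. This is an illustration under assumptions (equal-size clusters, symmetric quantization, and all function names are invented here), not the paper's exact algorithm.

```python
import numpy as np

def sort_and_cluster_quantize(x_calib, n_groups=4):
    """Illustrative sketch (assumed details, not Quamba2's exact procedure):
    sort channels by calibrated max magnitude, split the sorted order into
    contiguous equal-size clusters, and assign each cluster an 8-bit scale.
    x_calib: (tokens, channels) calibration activations."""
    ch_max = np.abs(x_calib).max(axis=0)        # per-channel max from calibration
    order = np.argsort(ch_max)                  # channel sorting, done offline
    clusters = np.array_split(order, n_groups)  # simple equal-size clustering
    scales = np.empty(x_calib.shape[1], dtype=np.float32)
    for c in clusters:
        scales[c] = ch_max[c].max() / 127.0     # one symmetric scale per cluster
    return order, scales

def quantize(x, scales):
    # Symmetric INT8 quantization with per-channel (cluster-shared) scales.
    return np.clip(np.round(x / scales), -128, 127).astype(np.int8)

def dequantize(q, scales):
    return q.astype(np.float32) * scales
```

Because channels with similar magnitudes share a scale after sorting, the quantization error per element stays within half a scale step; the offline weight rearrangement mentioned above would then permute downstream weights by `order` so the layer output is unchanged.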