Post-training quantization (\ptq) has recently emerged as a promising method to reduce the memory consumption and/or compute cost of large language models. However, a comprehensive study of the effects of different quantization schemes, model families, \ptq methods, and quantization bit precisions is still missing. In this work, we provide an extensive study of these components across tens of thousands of zero-shot experiments. Our results show that (1) fine-grained quantization and \ptq methods (rather than naive round-to-nearest quantization) are necessary to achieve good accuracy, and (2) higher-bit quantization (e.g., 5 bits) with coarse granularity is more powerful than lower-bit quantization (e.g., 4 bits) with very fine granularity, even when their effective bit widths are similar. We also present recommendations on how to utilize quantization for \llms of different sizes, and suggest future opportunities and system work left unresolved by this study.
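To make the trade-off between bit precision and quantization granularity concrete, the following is a minimal sketch (not the paper's implementation) of symmetric round-to-nearest quantization with per-group scales, together with the standard effective-bit-width accounting that amortizes one fp16 scale over each group. The function names and the choice of symmetric quantization are illustrative assumptions.

```python
import numpy as np

def rtn_quantize(x, bits=4, group_size=64):
    """Round-to-nearest symmetric quantization with per-group scales.

    Each group of `group_size` values shares one scale; smaller groups
    are finer-grained but store proportionally more scales per weight.
    This is an illustrative sketch, not the paper's exact method.
    """
    x = x.reshape(-1, group_size)
    qmax = 2 ** (bits - 1) - 1                     # e.g. 7 for 4-bit
    scale = np.abs(x).max(axis=1, keepdims=True) / qmax
    q = np.clip(np.round(x / scale), -qmax - 1, qmax)
    return (q * scale).reshape(-1)                 # dequantized weights

def effective_bits(bits, group_size, scale_bits=16):
    # amortize the per-group fp16 scale over the group's weights
    return bits + scale_bits / group_size

# 4-bit with group size 16 has roughly the storage cost of coarse 5-bit:
print(effective_bits(4, 16))   # 5.0
```

Under this accounting, 4-bit quantization with group size 16 and coarse 5-bit quantization occupy a similar memory budget, which is the comparison the abstract's second finding refers to.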