Post-training quantization (\ptq) has recently emerged as a promising method for reducing the memory consumption and/or compute cost of large language models. However, a comprehensive study of how its effectiveness varies across quantization schemes, model families, \ptq methods, quantization bit precisions, etc., is still missing. In this work, we provide an extensive study of these components based on tens of thousands of zero-shot experiments. Our results show that (1) fine-grained quantization and \ptq methods (rather than naive round-to-nearest quantization) are necessary to achieve good accuracy, and (2) coarse-grained quantization with higher bit precision (e.g., 5 bits) is more powerful than very fine-grained quantization with lower bit precision (e.g., 4 bits, whose effective bit precision is similar to 5 bits). We also present recommendations on how to utilize quantization for \llms of different sizes, and point out future opportunities and system work that remain unresolved.
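To make the terminology above concrete, the following is a minimal NumPy sketch (not the implementation used in our experiments) of symmetric round-to-nearest quantization with an optional group size. The effective-bit estimate assumes one fp16 scale stored per group, which is an illustrative assumption; under it, 4-bit weights with a group size of 16 yield roughly $4 + 16/16 = 5$ effective bits, the comparison point referenced in finding (2).

\begin{verbatim}
import numpy as np

def rtn_quantize(w, bits=4, group_size=None):
    """Symmetric round-to-nearest (RTN) quantization.

    group_size=None -> one scale for the whole tensor (coarse-grained);
    group_size=g    -> one scale per g consecutive weights (fine-grained).
    """
    qmax = 2 ** (bits - 1) - 1
    flat = w.reshape(-1)
    g = flat.size if group_size is None else group_size
    groups = flat.reshape(-1, g)
    scale = np.abs(groups).max(axis=1, keepdims=True) / qmax
    scale = np.maximum(scale, 1e-8)  # guard against all-zero groups
    q = np.clip(np.round(groups / scale), -qmax - 1, qmax)
    dequant = (q * scale).reshape(w.shape)
    # Effective bits: payload bits plus an fp16 scale amortized over each group.
    effective_bits = bits + 16.0 / g
    return dequant, effective_bits

rng = np.random.default_rng(0)
w = rng.standard_normal((4096, 4096)).astype(np.float32)
for bits, gsz in [(4, None), (4, 16), (5, None)]:
    dq, eff = rtn_quantize(w, bits, gsz)
    print(f"bits={bits} group={gsz}: effective bits ~ {eff:.2f}, "
          f"mean abs error = {np.abs(w - dq).mean():.4f}")
\end{verbatim}

The sketch only illustrates the storage trade-off: shrinking the group size reduces quantization error but raises the effective bit precision, so a fair comparison must hold effective bits roughly constant.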