Hidden Markov models (HMMs) are widely used in generation tasks and, owing to the Markov property, have demonstrated strong capabilities in neuro-symbolic applications. These applications combine the strengths of neural networks and symbolic reasoning to build robust and interpretable AI systems; however, they may also inherit and amplify the shortcomings of both approaches. Both components require dense computation and data transfer, and the communication between them further hinders performance. This paper proposes Norm-Q, a normalized linear quantization approach for compressing probabilistic symbolic models such as HMMs. We reduce the bit width of the data with minimal impact on accuracy, alleviating memory and bandwidth pressure and enabling deployment on potential custom hardware. Our method introduces a normalized quantization-aware expectation-maximization (EM) process for probabilistic model training. Experimental results show that Norm-Q achieves a higher compression rate than traditional quantization methods at a comparable score loss. For the constrained generation task of large language models, we quantize an HMM with 4096 hidden states to 8 bits without loss, and to as few as 3 bits with acceptable loss. Notably, Norm-Q achieves a compression rate of up to 99% for the HMM weights. The code is open source at https://github.com/superstarghy/Norm-Q.
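To make the core idea concrete, the following is a minimal sketch of what normalized linear quantization of a probability vector might look like: linearly map probabilities to low-bit integer codes, then renormalize so the dequantized values still form a valid distribution. This is an illustrative assumption, not the paper's exact algorithm (the function name `norm_quantize` and the max-scaling choice are hypothetical; the actual Norm-Q method couples quantization with EM training).

```python
import numpy as np

def norm_quantize(p, bits=8):
    """Illustrative sketch (not the paper's exact method):
    linearly quantize a probability vector to `bits`-bit integer
    codes, then renormalize so the result sums to 1."""
    levels = 2 ** bits - 1
    # Linear quantization: scale by the max probability, round to integers.
    q = np.round(p / p.max() * levels).astype(np.int32)
    # Normalization step: dequantized values form a valid distribution.
    deq = q / q.sum()
    return q, deq

# Example: quantize a 4-state emission row to 3 bits (8 levels).
p = np.array([0.7, 0.2, 0.08, 0.02])
q, deq = norm_quantize(p, bits=3)
```

Storing only the small integer codes `q` (plus a per-row scale) is what reduces memory and bandwidth; the renormalization keeps each row of the HMM's transition and emission matrices a proper probability distribution.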