Post-training quantization (PTQ) aims to preserve model-level behavior; however, most methods optimize individual linear layers in isolation. Even recent extensions such as QEP and LoaQ, which mitigate error propagation or target specific submodules, still rely on layer-wise formulations and fail to capture the behavior of larger submodules. We introduce Layer-Projected Coordinate Descent (LPCD), a unified framework that extends PTQ beyond individual layers by optimizing relaxed objectives over arbitrary submodules and projecting the solutions back with layer-wise quantizers. LPCD generalizes existing methods and provides a principled approach to quantizing complex submodules while retaining the efficiency and compatibility of layer-wise PTQ pipelines. Across diverse LLM architectures and bit-widths, LPCD-based submodule quantization consistently improves on both layer-wise PTQ methods and existing submodule approaches.
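To make the relax-then-project loop concrete, the following is a minimal sketch of the kind of coordinate descent the abstract describes, not the paper's actual implementation: it assumes a two-layer submodule, uses plain round-to-nearest as the layer-wise projector (any layer-wise PTQ quantizer could stand in), and solves each relaxed objective in closed form via least squares. The function names `quantize_rtn` and `lpcd_two_layer` are hypothetical.

```python
import numpy as np

def quantize_rtn(W, n_bits=4):
    """Layer-wise projection step: symmetric round-to-nearest quantizer.
    A hypothetical stand-in for any layer-wise PTQ quantizer."""
    q_max = 2 ** (n_bits - 1) - 1
    scale = np.abs(W).max() / q_max
    return np.clip(np.round(W / scale), -q_max - 1, q_max) * scale

def lpcd_two_layer(X, W1, W2, n_bits=4, n_iters=5):
    """Sketch of LPCD-style coordinate descent on a two-layer submodule
    Y = X @ W1 @ W2: alternately solve a relaxed least-squares objective
    for one layer with the other fixed, then project with the quantizer."""
    Y = X @ W1 @ W2  # full-precision submodule output to match
    Q1, Q2 = quantize_rtn(W1, n_bits), quantize_rtn(W2, n_bits)
    for _ in range(n_iters):
        # Relaxed objective for layer 1: min_W ||X W Q2 - Y||_F
        # (closed form via pseudoinverses), then project onto the grid.
        W1_relaxed = np.linalg.pinv(X) @ Y @ np.linalg.pinv(Q2)
        Q1 = quantize_rtn(W1_relaxed, n_bits)
        # Relaxed objective for layer 2 given projected Q1: min_W ||X Q1 W - Y||_F.
        W2_relaxed = np.linalg.pinv(X @ Q1) @ Y
        Q2 = quantize_rtn(W2_relaxed, n_bits)
    return Q1, Q2

# Toy usage on synthetic calibration data.
rng = np.random.default_rng(0)
X = rng.standard_normal((256, 64))            # calibration activations
W1 = rng.standard_normal((64, 64)) / 8
W2 = rng.standard_normal((64, 64)) / 8
Q1, Q2 = lpcd_two_layer(X, W1, W2)
rel_err = (np.linalg.norm(X @ Q1 @ Q2 - X @ W1 @ W2)
           / np.linalg.norm(X @ W1 @ W2))
print(f"relative submodule output error: {rel_err:.4f}")
```

Because the objective is measured on the submodule's output rather than each layer's output, errors introduced by quantizing one layer can be compensated when the other layer is re-optimized, which is the intended advantage over purely layer-wise formulations.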