Decoder-only autoregressive image generation typically relies on fixed-length tokenization schemes whose token counts grow quadratically with resolution, substantially increasing the computational and memory demands of attention. We present DPAR, a novel decoder-only autoregressive model that dynamically aggregates image tokens into a variable number of patches for efficient image generation. Our work is the first to demonstrate that next-token prediction entropy from a lightweight, unsupervised autoregressive model provides a reliable criterion for merging tokens into larger patches based on information content. DPAR makes minimal modifications to the standard decoder architecture, ensuring compatibility with multimodal generation frameworks and allocating more compute to generating high-information image regions. Furthermore, we demonstrate that training with dynamically sized patches yields representations that are robust to patch boundaries, allowing DPAR to scale to larger patch sizes at inference. DPAR reduces token count by 1.81x and 2.06x for ImageNet generation at 256 and 384 resolution, respectively, cutting training FLOPs by up to 40%. Moreover, our method converges faster and improves FID by up to 27.1% relative to baseline models.
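To make the merging criterion concrete, the sketch below shows one plausible instantiation of entropy-driven token aggregation: per-token Shannon entropies are computed from the predictive distributions of a small autoregressive model, and consecutive low-entropy tokens are greedily grouped into a patch up to a maximum size. The function names, the `threshold` and `max_patch` parameters, and the greedy grouping rule are all illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def token_entropies(probs):
    """Shannon entropy (in nats) of each next-token distribution.
    probs: (T, V) array of predictive distributions from a small AR model.
    (Hypothetical helper; DPAR's exact formulation may differ.)"""
    p = np.clip(probs, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=-1)

def merge_tokens(entropies, threshold=1.0, max_patch=4):
    """Greedily group consecutive tokens into variable-size patches.
    A token joins the current patch while its entropy stays below
    `threshold` and the patch holds fewer than `max_patch` tokens;
    otherwise a new patch starts. Returns a list of (start, end) spans.
    (Assumed merging rule for illustration only.)"""
    patches, start = [], 0
    for i in range(1, len(entropies)):
        if entropies[i] < threshold and (i - start) < max_patch:
            continue  # low-information token: absorb into current patch
        patches.append((start, i))  # high-entropy token begins a new patch
        start = i
    patches.append((start, len(entropies)))
    return patches

# Example: two confident (low-entropy) runs collapse into two patches.
spans = merge_tokens([2.0, 0.1, 0.1, 2.0, 0.1], threshold=1.0, max_patch=4)
print(spans)  # -> [(0, 3), (3, 5)]: 5 tokens reduced to 2 patches
```

Under this toy rule, uniform (maximally uncertain) regions stay as single-token patches while predictable regions are compressed, which mirrors the abstract's goal of allocating compute to high-information regions.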