In Diffusion Transformer (DiT) models, particularly for video generation, attention latency is a major bottleneck due to long sequence lengths and the quadratic complexity of attention. We find that attention weights can be separated into two parts: a small fraction of large weights with high rank and the remaining weights with very low rank. This naturally suggests applying sparse acceleration to the first part and low-rank acceleration to the second. Based on this finding, we propose SLA (Sparse-Linear Attention), a trainable attention method that fuses sparse and linear attention to accelerate diffusion models. SLA classifies attention weights into critical, marginal, and negligible categories, applying O(N^2) attention to critical weights, O(N) attention to marginal weights, and skipping negligible ones. SLA combines these computations into a single GPU kernel and supports both forward and backward passes. With only a few fine-tuning steps using SLA, DiT models achieve a 20x reduction in attention computation, resulting in significant acceleration without loss of generation quality. Experiments show that SLA reduces attention computation by 95% without degrading end-to-end generation quality, outperforming baseline methods. In addition, we implement an efficient GPU kernel for SLA, which yields a 13.7x speedup in attention computation and a 2.2x end-to-end speedup in video generation on Wan2.1-1.3B. The code is available at https://github.com/thu-ml/SLA.
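To make the critical/marginal/negligible split concrete, the following is a minimal PyTorch sketch, not the fused GPU kernel from the paper. It estimates block-level attention importance, routes critical blocks through full O(N^2) attention, approximates marginal blocks with a standard kernelized linear attention (feature map elu(x)+1), and skips negligible blocks. The function name `sla_reference`, the block size, the quantile thresholds, and the gating of the linear path are illustrative assumptions, not the authors' exact formulation.

```python
# Naive reference for the sparse + linear attention split described above.
# All thresholds, the block size, and the elu-based feature map are assumptions
# chosen for illustration; the paper implements this as a single fused kernel.
import torch
import torch.nn.functional as F


def sla_reference(q, k, v, block=64, critical_frac=0.05, negligible_frac=0.50):
    """q, k, v: [heads, N, d] with N divisible by `block`."""
    h, n, d = q.shape
    scale = d ** -0.5
    nb = n // block

    # 1) Cheap block-level importance estimate from mean-pooled Q/K.
    q_blk = q.reshape(h, nb, block, d).mean(dim=2)                      # [h, nb, d]
    k_blk = k.reshape(h, nb, block, d).mean(dim=2)                      # [h, nb, d]
    score = torch.softmax(q_blk @ k_blk.transpose(-1, -2) * scale, -1)  # [h, nb, nb]

    # 2) Classify blocks: largest -> critical, smallest -> negligible, rest -> marginal.
    flat = score.reshape(h, -1)
    hi_thr = torch.quantile(flat, 1 - critical_frac, dim=-1, keepdim=True)
    lo_thr = torch.quantile(flat, negligible_frac, dim=-1, keepdim=True)
    critical = (flat >= hi_thr).reshape(h, nb, nb)
    negligible = (flat < lo_thr).reshape(h, nb, nb)
    marginal = ~critical & ~negligible

    # 3) O(N^2) path: softmax attention restricted to critical blocks.
    logits = q @ k.transpose(-1, -2) * scale                            # [h, N, N]
    mask = critical.repeat_interleave(block, 1).repeat_interleave(block, 2)
    sparse_w = torch.softmax(logits.masked_fill(~mask, float("-inf")), dim=-1)
    sparse_out = torch.nan_to_num(sparse_w) @ v                         # empty rows -> 0

    # 4) O(N) path: kernelized linear attention covering marginal mass.
    phi_q, phi_k = F.elu(q) + 1, F.elu(k) + 1
    kv = torch.einsum("hnd,hne->hde", phi_k, v)                         # [h, d, d]
    z = phi_k.sum(dim=1)                                                # [h, d]
    linear_out = torch.einsum("hnd,hde->hne", phi_q, kv) / (
        torch.einsum("hnd,hd->hn", phi_q, z).unsqueeze(-1) + 1e-6)
    marg_gate = marginal.float().mean(-1).repeat_interleave(block, 1).unsqueeze(-1)

    # 5) Negligible blocks are skipped entirely and contribute nothing.
    return sparse_out + marg_gate * linear_out


# Tiny usage example with random tensors.
q = torch.randn(2, 256, 64)
k = torch.randn(2, 256, 64)
v = torch.randn(2, 256, 64)
print(sla_reference(q, k, v).shape)  # torch.Size([2, 256, 64])
```

This reference materializes the full N x N logits for clarity, so it does not show the claimed speedup; the efficiency in the paper comes from computing only the critical blocks and fusing both paths in one kernel.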