This paper presents PipeFusion, an innovative parallel methodology that tackles the high latency of generating high-resolution images with diffusion transformer (DiT) models. PipeFusion partitions images into patches and distributes the model layers across multiple GPUs, employing a patch-level pipeline-parallel strategy to orchestrate communication and computation efficiently. By exploiting the high similarity between the inputs of successive diffusion steps, PipeFusion reuses one-step-stale feature maps to provide context for the current pipeline step. This approach notably reduces communication cost compared with existing DiT inference parallelism, including tensor parallelism, sequence parallelism, and DistriFusion. PipeFusion also exhibits superior memory efficiency, because it distributes model parameters across multiple devices, making it well suited to DiTs with large parameter counts, such as Flux.1. Experimental results demonstrate that PipeFusion achieves state-of-the-art performance on 8xL40 PCIe GPUs for the PixArt, Stable Diffusion 3, and Flux.1 models. Our source code is available at https://github.com/xdit-project/xDiT.
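To make the patch-level pipeline with stale-feature reuse concrete, the following is a minimal, single-process Python sketch of the scheduling idea only. All names, shapes, and the toy "stages" are illustrative assumptions and do not reflect the xDiT implementation; in the real system each stage would run on a separate GPU and exchange patch activations asynchronously.

```python
# Minimal single-process sketch of PipeFusion-style patch-level pipeline
# parallelism with one-step stale feature reuse. All names/shapes are
# illustrative assumptions, not the actual xDiT/PipeFusion implementation.
import numpy as np

NUM_STAGES = 4    # model layers split into 4 hypothetical pipeline stages (GPUs)
NUM_PATCHES = 4   # image latent split into 4 patches
HIDDEN = 8
STEPS = 3         # diffusion steps

rng = np.random.default_rng(0)
# Each "stage" is a toy linear layer standing in for a block of DiT layers.
stage_weights = [rng.standard_normal((HIDDEN, HIDDEN)) / HIDDEN for _ in range(NUM_STAGES)]

def run_stage(stage, patch_latent, stale_context):
    """Apply one pipeline stage to a single patch.

    `stale_context` is the full-image activation from the *previous*
    diffusion step; reusing it means a patch does not have to wait for the
    fresh activations of all other patches at the current step.
    """
    context = stale_context.mean(axis=0, keepdims=True)  # toy stand-in for attention over stale K/V
    return np.tanh((patch_latent + context) @ stage_weights[stage])

# Per-(stage, patch) activations carried over from the previous diffusion step.
stale = [[rng.standard_normal((16, HIDDEN)) for _ in range(NUM_PATCHES)]
         for _ in range(NUM_STAGES)]
latents = [rng.standard_normal((16, HIDDEN)) for _ in range(NUM_PATCHES)]

for step in range(STEPS):
    fresh = [[None] * NUM_PATCHES for _ in range(NUM_STAGES)]
    for stage in range(NUM_STAGES):
        for p in range(NUM_PATCHES):
            inp = latents[p] if stage == 0 else fresh[stage - 1][p]
            # Context comes from the stale (previous-step) activations of all
            # patches, so per-patch work can be pipelined across stages instead
            # of synchronizing on the current step's full-image activations.
            stale_full = np.concatenate(stale[stage], axis=0)
            fresh[stage][p] = run_stage(stage, inp, stale_full)
    stale = fresh        # this step's activations become the next step's stale context
    latents = fresh[-1]  # last stage's output feeds the next diffusion step
    print(f"step {step}: patch output norms =",
          [round(float(np.linalg.norm(x)), 2) for x in latents])
```

In this schedule, only patch-sized activations move between stages, and the full-image context each stage needs is taken from the previous step's (stale) activations, which is the source of the communication savings the abstract describes.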