Mixture-of-experts-based (MoE-based) diffusion models have demonstrated strong scalability and the ability to generate high-quality images, making them a promising choice for efficient model scaling. However, they rely on expert parallelism across GPUs, which makes efficient parallelism optimization essential. While state-of-the-art parallel inference methods for diffusion models overlap communication and computation via displaced operations, they introduce substantial staleness -- the use of outdated activations -- which is especially severe under expert parallelism and leads to significant performance degradation. We identify this staleness issue and propose DICE, a staleness-centric optimization with a three-fold approach: (1) Interweaved Parallelism, which reduces step-level staleness at no extra cost while still overlapping communication and computation; (2) Selective Synchronization, which operates at the layer level and protects critical layers vulnerable to stale activations; and (3) Conditional Communication, a token-level, training-free method that dynamically adjusts communication frequency based on token importance. Together, these optimizations effectively reduce staleness, achieving up to a 1.2x speedup with minimal quality degradation. Our results establish DICE as an effective and scalable solution for large-scale MoE-based diffusion model inference.