Diffusion-based video super-resolution (VSR) methods achieve strong perceptual quality but remain impractical for latency-sensitive settings due to their reliance on future frames and expensive multi-step denoising. We propose Stream-DiffVSR, a causally conditioned diffusion framework for efficient online VSR. Operating strictly on past frames, it combines a four-step distilled denoiser for fast inference, an Auto-regressive Temporal Guidance (ARTG) module that injects motion-aligned cues during latent denoising, and a lightweight temporal-aware decoder with a Temporal Processor Module (TPM) that enhances detail and temporal coherence. Stream-DiffVSR processes a 720p frame in 0.328 seconds on an RTX 4090 GPU and substantially outperforms prior diffusion-based methods. Compared with TMP, the online state of the art, it improves perceptual quality (a 0.095 LPIPS gain) while cutting latency by over 130×. Stream-DiffVSR achieves the lowest latency reported for diffusion-based VSR, reducing initial delay from over 4600 seconds to 0.328 seconds, making it the first diffusion VSR method suitable for low-latency online deployment. Project page: https://jamichss.github.io/stream-diffvsr-project-page/
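
To make the causal pipeline concrete, the sketch below outlines the streaming loop the abstract describes: each incoming low-resolution frame is denoised in four distilled steps under guidance derived from the previous frame's latent, then decoded with features carried over from the previous output. This is a minimal toy sketch, not the authors' implementation: the names `FourStepDenoiser`, `TemporalDecoder`, and `stream_vsr`, the channel counts, and the single-conv stand-ins are all hypothetical, the real ARTG module motion-aligns the previous latent rather than reusing it directly, and upsampling to 720p is elided.

```python
import torch
import torch.nn as nn

class FourStepDenoiser(nn.Module):
    """Stand-in for the four-step distilled denoiser (hypothetical)."""
    def __init__(self, ch=4):
        super().__init__()
        self.net = nn.Conv2d(2 * ch, ch, 3, padding=1)

    def forward(self, z, guidance):
        # One distilled step: refine the latent conditioned on guidance.
        return self.net(torch.cat([z, guidance], dim=1))

class TemporalDecoder(nn.Module):
    """Stand-in for the temporal-aware decoder; the TPM fuses features
    carried over from the previously decoded frame (hypothetical)."""
    def __init__(self, ch=4):
        super().__init__()
        self.tpm = nn.Conv2d(2 * ch, ch, 3, padding=1)
        self.to_rgb = nn.Conv2d(ch, 3, 3, padding=1)

    def forward(self, z, carry):
        feat = self.tpm(torch.cat([z, carry], dim=1))
        return self.to_rgb(feat), feat

@torch.no_grad()
def stream_vsr(lr_frames, encode, denoiser, decoder, steps=4):
    """Causal loop: frame t sees only frames <= t, so each output is
    emitted as soon as its LR frame arrives (no look-ahead buffer)."""
    outs, prev_z, carry = [], None, None
    for lr in lr_frames:
        cond = encode(lr)
        z = torch.randn_like(cond)            # start from latent noise
        if prev_z is not None:
            # ARTG stand-in: the real module motion-aligns the previous
            # latent to the current frame before injecting it as a cue.
            cond = prev_z
        for _ in range(steps):                # four distilled steps
            z = denoiser(z, cond)
        if carry is None:
            carry = torch.zeros_like(z)
        frame, carry = decoder(z, carry)      # TPM carries features forward
        prev_z = z
        outs.append(frame)
    return outs

# Toy usage on 64x64 frames (real super-resolving to 720p is elided):
enc = nn.Conv2d(3, 4, 3, padding=1)
hr = stream_vsr([torch.rand(1, 3, 64, 64) for _ in range(3)],
                enc, FourStepDenoiser(), TemporalDecoder())
```

The key structural point the sketch captures is that both recurrent states (the previous latent and the decoder's carried features) flow strictly forward in time, which is what lets the method emit each frame immediately instead of buffering future frames.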