Video streams are delivered continuously to save the cost of storage and device memory. Real-time denoising algorithms are typically adopted on the user device to remove the noise introduced during the shooting and transmission of video streams. However, sliding-window-based methods feed multiple input frames for a single output and lack computational efficiency. Recent multi-output inference works propagate bidirectional temporal features with a parallel or recurrent framework, which either suffers from performance drops on the temporal edges of clips or cannot achieve online inference. In this paper, we propose a Bidirectional Streaming Video Denoising (BSVD) framework to achieve high-fidelity real-time denoising for streaming videos with both past and future temporal receptive fields. Bidirectional temporal fusion for online inference was considered not applicable in MoViNet. However, we introduce a novel Bidirectional Buffer Block as the core module of our BSVD, which makes bidirectional fusion possible in our pipeline-style inference. In addition, our method is concise and flexible enough to be utilized in both non-blind and blind video denoising. We compare our model with various state-of-the-art video denoising models qualitatively and quantitatively on synthetic and real noise. Our method outperforms previous methods in terms of restoration fidelity and runtime. Our source code is publicly available at https://github.com/ChenyangQiQi/BSVD
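To illustrate the pipeline-style streaming idea behind a buffer block, the following is a minimal, hypothetical sketch (not the paper's actual implementation: the class name, buffer size, and the simple averaging fusion are all illustrative stand-ins for the learned Bidirectional Buffer Block). Each incoming frame's features are cached, and every output is emitted with a one-frame delay, once its "future" neighbor has arrived, so a single pass over the stream yields both past and future temporal receptive fields without reprocessing a sliding window:

```python
import numpy as np

class BidirectionalBufferBlock:
    """Illustrative streaming buffer (assumption: the real module uses
    learned temporal fusion, not a plain average)."""

    def __init__(self):
        self.buffer = []  # caches up to three consecutive feature frames

    def step(self, feat):
        """Push one frame's features; return a fused output, or None
        while the temporal context is still warming up."""
        self.buffer.append(feat)
        if len(self.buffer) < 3:
            return None  # output is delayed until a future frame arrives
        past, cur, future = self.buffer
        fused = (past + cur + future) / 3.0  # stand-in for learned fusion
        self.buffer.pop(0)  # slide the temporal window forward by one frame
        return fused
```

Feeding frames one at a time keeps per-step cost constant; unlike sliding-window inference, each frame's features are computed once and reused from the buffer.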