Existing open-source film restoration methods show limited performance compared to commercial tools because they are trained on low-quality synthetic data and rely on noisy optical flow. In addition, high-resolution films have not been explored by open-source methods. We propose HaineiFRDM (Film Restoration Diffusion Model), a film restoration framework that exploits the strong content-understanding ability of diffusion models to help human experts restore defects that are otherwise hard to distinguish. Specifically, we employ a patch-wise training and testing strategy that makes it possible to restore high-resolution films on a single 24GB-VRAM GPU, and we design position-aware Global Prompt and Frame Fusion modules. We also introduce a global-local frequency module to reconstruct consistent textures across different patches. In addition, we first restore a low-resolution result and use it as a global residual to mitigate the blocky artifacts caused by the patching process. Furthermore, we construct a film restoration dataset that contains restored real-degraded films and realistic synthetic data. Comprehensive experimental results demonstrate that our model outperforms existing open-source methods in defect restoration. The code and dataset will be released.
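The abstract mentions a patch-wise inference strategy combined with a low-resolution restoration used as a global residual. The sketch below illustrates, under assumptions, how such a pipeline could be organised; it is not the paper's implementation. The functions restore_patch and restore_lowres are hypothetical placeholders for the model's per-patch and low-resolution passes, and the patch size, overlap, and residual weight are illustrative values only.

```python
# Minimal sketch (NOT the paper's code) of patch-wise restoration with a
# low-resolution global residual to reduce blocky seams between patches.
import torch
import torch.nn.functional as F


def restore_lowres(frame: torch.Tensor, scale: float = 0.25) -> torch.Tensor:
    """Placeholder for the model's low-resolution restoration pass."""
    low = F.interpolate(frame, scale_factor=scale, mode="bilinear",
                        align_corners=False)
    return low  # a real model would remove film defects here


def restore_patch(patch: torch.Tensor) -> torch.Tensor:
    """Placeholder for the per-patch diffusion restoration (identity here)."""
    return patch


def patchwise_restore(frame: torch.Tensor, patch: int = 512,
                      overlap: int = 64) -> torch.Tensor:
    """Restore a high-resolution frame patch by patch, then blend in an
    upsampled low-resolution restoration as a global residual."""
    _, _, h, w = frame.shape
    out = torch.zeros_like(frame)
    weight = torch.zeros_like(frame)
    stride = patch - overlap
    for top in range(0, max(h - overlap, 1), stride):
        for left in range(0, max(w - overlap, 1), stride):
            b, r = min(top + patch, h), min(left + patch, w)
            t, l = max(b - patch, 0), max(r - patch, 0)
            out[:, :, t:b, l:r] += restore_patch(frame[:, :, t:b, l:r])
            weight[:, :, t:b, l:r] += 1.0
    out = out / weight.clamp(min=1.0)  # average overlapping patches
    # Global residual: upsample the low-resolution result so all patches
    # share a consistent global appearance at their boundaries.
    global_ref = F.interpolate(restore_lowres(frame), size=(h, w),
                               mode="bilinear", align_corners=False)
    return out + 0.1 * (global_ref - out)  # residual weight is illustrative


if __name__ == "__main__":
    frame = torch.rand(1, 3, 1080, 1920)  # one HD film frame
    print(patchwise_restore(frame).shape)  # torch.Size([1, 3, 1080, 1920])
```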