We present DeblurGAN, an end-to-end learned method for motion deblurring. The learning is based on a conditional GAN and a content loss. DeblurGAN achieves state-of-the-art performance in both the structural similarity measure and visual appearance. The quality of the deblurring model is also evaluated in a novel way on a real-world problem -- object detection on (de-)blurred images. The method is 5 times faster than the closest competitor -- DeepDeblur. We also introduce a novel method for generating synthetic motion-blurred images from sharp ones, allowing realistic dataset augmentation. The model, code and the dataset are available at https://github.com/KupynOrest/DeblurGAN.
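As a minimal illustration of synthetic motion blur, the sketch below convolves a sharp grayscale image with a linear motion kernel. This is only a toy stand-in: the paper's actual generation method is trajectory-based, and the helper names (`linear_motion_kernel`, `blur`) are hypothetical, not from the DeblurGAN codebase.

```python
import numpy as np

def linear_motion_kernel(length: int, angle_deg: float) -> np.ndarray:
    """Build a normalized linear motion-blur kernel.

    Hypothetical helper for illustration only; the paper generates
    blur from random (non-linear) camera trajectories instead.
    """
    k = np.zeros((length, length))
    c = length // 2
    theta = np.deg2rad(angle_deg)
    # Rasterize a line segment through the kernel center at the given angle.
    for t in np.linspace(-c, c, 4 * length):
        x = int(round(c + t * np.cos(theta)))
        y = int(round(c + t * np.sin(theta)))
        if 0 <= x < length and 0 <= y < length:
            k[y, x] = 1.0
    return k / k.sum()  # normalize so the blur preserves brightness

def blur(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Convolve a 2-D grayscale image with the kernel (edge padding)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)), mode="edge")
    out = np.zeros(image.shape, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            # Flip the kernel for true convolution (vs. correlation).
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel[::-1, ::-1])
    return out
```

Pairs of sharp and blurred images produced this way can serve as supervised training data; the paper's trajectory-based generator produces more realistic, non-linear blur than this single-direction kernel.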