Although many long-range imaging systems are designed to support extended vision applications, a natural obstacle to their operation is degradation due to atmospheric turbulence. Atmospheric turbulence significantly degrades image quality by introducing blur and geometric distortion. In recent years, various deep learning-based single-image atmospheric turbulence mitigation methods, including CNN-based and GAN inversion-based approaches, have been proposed in the literature to remove these distortions. However, some of these methods are difficult to train and often fail to reconstruct facial features, producing unrealistic results, especially in the case of high turbulence. Denoising Diffusion Probabilistic Models (DDPMs) have recently gained traction because of their stable training process and their ability to generate high-quality images. In this paper, we propose the first DDPM-based solution for the problem of atmospheric turbulence mitigation. We also propose a fast sampling technique for reducing the inference time of conditional DDPMs. Extensive experiments are conducted on synthetic and real-world data to show the significance of our model. To facilitate further research, all code and pretrained models will be made public after the review process.
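As background for readers unfamiliar with DDPMs, the sketch below shows the standard ancestral sampling loop of an unconditional DDPM (Ho et al., 2020), which the abstract builds on. The noise-prediction network `eps_model`, the variance schedule `betas`, and the tensor shapes are illustrative assumptions; this is not the conditional model or the fast sampling technique proposed in this paper.

```python
import torch

@torch.no_grad()
def ddpm_sample(eps_model, betas, shape, device="cpu"):
    """Standard DDPM ancestral sampling (Ho et al., 2020).

    eps_model(x_t, t) is assumed to predict the noise added at step t;
    betas is a 1-D tensor holding the variance schedule beta_1..beta_T.
    Generic background code, not the method proposed in this paper.
    """
    alphas = 1.0 - betas
    alphas_cumprod = torch.cumprod(alphas, dim=0)

    # start from pure Gaussian noise x_T
    x = torch.randn(shape, device=device)
    for t in reversed(range(len(betas))):
        t_batch = torch.full((shape[0],), t, device=device, dtype=torch.long)
        eps = eps_model(x, t_batch)  # predicted noise eps_theta(x_t, t)

        # posterior mean: (x_t - beta_t / sqrt(1 - alpha_bar_t) * eps) / sqrt(alpha_t)
        coef = betas[t] / torch.sqrt(1.0 - alphas_cumprod[t])
        mean = (x - coef * eps) / torch.sqrt(alphas[t])

        if t > 0:
            # add noise with sigma_t = sqrt(beta_t) at every step except the last
            x = mean + torch.sqrt(betas[t]) * torch.randn_like(x)
        else:
            x = mean
    return x
```

A conditional DDPM for restoration typically concatenates the degraded observation to `x` at every step so that `eps_model` is conditioned on it, and fast sampling methods shorten the loop by visiting only a subset of the T timesteps; the specifics of how this paper does both are described in the main body rather than here.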