Image restoration algorithms for atmospheric turbulence are known to be much more challenging to design than those for traditional degradations such as blur or noise, because the distortion caused by turbulence is an entanglement of spatially varying blur, geometric distortion, and sensor noise. Existing CNN-based restoration methods, built upon convolutional kernels with static weights, are insufficient to handle the spatially varying and dynamic atmospheric turbulence effect. To address this problem, in this paper we propose a physics-inspired transformer model for imaging through atmospheric turbulence. The proposed network leverages transformer blocks to jointly extract a dynamic turbulence distortion map and restore a turbulence-free image. In addition, recognizing the lack of a comprehensive dataset, we collect and present two new real-world turbulence datasets that allow evaluation with both classical objective metrics (e.g., PSNR and SSIM) and a new task-driven metric based on text recognition accuracy. Both real testing sets and all related code will be made publicly available.
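To make the "entanglement" concrete, a commonly used simplified degradation model from the turbulence-mitigation literature (a sketch for intuition; the exact physics-inspired formulation adopted in this paper may differ) writes an observed frame as a per-pixel geometric tilt followed by a spatially varying blur plus sensor noise:

$$\tilde{I}(\mathbf{x}) = \big[\mathcal{B}_{\mathbf{x}} \circ \mathcal{T}_{\mathbf{x}}\big](I)(\mathbf{x}) + n(\mathbf{x}),$$

where $I$ is the latent clean image, $\mathcal{T}_{\mathbf{x}}$ is a pixel-wise warping (geometric distortion), $\mathcal{B}_{\mathbf{x}}$ is a spatially varying blur operator, and $n$ is sensor noise. Because $\mathcal{T}_{\mathbf{x}}$ and $\mathcal{B}_{\mathbf{x}}$ vary across pixels and frames, a convolution with a single static kernel cannot invert them jointly, which motivates the dynamic, spatially adaptive processing described above.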