Atmospheric turbulence distorts visual imagery, hampering information interpretation by both humans and machines. The most mature approaches to removing atmospheric turbulence distortion are model-based; however, they demand heavy computation and large memory, making real-time operation infeasible. Deep-learning-based approaches have therefore gained attention, but they currently work efficiently only on static scenes. This paper presents a novel learning-based framework with a short temporal span to support dynamic scenes. We exploit complex-valued convolutions because phase information, which is altered by atmospheric turbulence, is captured better than with ordinary real-valued convolutions. Two concatenated modules are proposed: the first removes geometric distortions and, if memory permits, the second refines the fine details of the videos. Experimental results show that our framework efficiently mitigates atmospheric turbulence distortion and significantly outperforms existing methods.
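The abstract does not specify the network architecture, but the core building block it names, a complex-valued convolution, can be sketched from its definition: a complex product decomposes into four real-valued convolutions via (a+ib)(c+id) = (ac-bd) + i(ad+bc). The following minimal NumPy sketch (the function name and the naive sliding-window loop are illustrative assumptions, not the paper's implementation) shows a "valid"-mode complex cross-correlation built that way:

```python
import numpy as np

def complex_conv2d_valid(x, w):
    """Naive 'valid'-mode 2-D cross-correlation of a complex image x with a
    complex kernel w, assembled from four real-valued correlations:
    (a+ib)(c+id) = (ac - bd) + i(ad + bc).
    Illustrative sketch only -- not the paper's actual layer."""
    def real_corr(a, k):
        # Plain real-valued sliding-window correlation, no padding.
        H, W = a.shape
        kh, kw = k.shape
        out = np.zeros((H - kh + 1, W - kw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(a[i:i + kh, j:j + kw] * k)
        return out

    xr, xi = x.real, x.imag
    wr, wi = w.real, w.imag
    real_part = real_corr(xr, wr) - real_corr(xi, wi)
    imag_part = real_corr(xr, wi) + real_corr(xi, wr)
    return real_part + 1j * imag_part
```

Because the imaginary channel carries phase, a learned complex kernel can rotate as well as scale each local patch in the complex plane, which is the property the abstract appeals to for modeling turbulence-induced phase perturbations.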