We present a self-supervised and self-calibrating multi-shot approach to imaging through atmospheric turbulence, called TurbuGAN. Our approach requires no paired training data, adapts itself to the distribution of the turbulence, leverages domain-specific data priors, and can generalize from tens to thousands of measurements. We achieve such functionality through an adversarial sensing framework adapted from CryoGAN, which uses a discriminator network to match the distributions of captured and simulated measurements. Our framework builds on CryoGAN by (1) generalizing the forward measurement model to incorporate physically accurate and computationally efficient models for light propagation through anisoplanatic turbulence, (2) enabling adaptation to slightly misspecified forward models, and (3) leveraging domain-specific prior knowledge using pretrained generative networks, when available. We validate TurbuGAN on both computationally simulated and experimentally captured images distorted with anisoplanatic turbulence.
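The adversarial sensing idea above can be sketched in miniature: an image estimate is pushed through a stochastic turbulence forward model, and a discriminator is trained to tell simulated measurements from captured ones, while the estimate is nudged to fool it. Everything below is an illustrative toy (numpy, a shift-plus-blur stand-in for anisoplanatic turbulence, a logistic discriminator, and a crude surrogate generator step), not the actual TurbuGAN architecture, which uses deep networks and a differentiable propagation model.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_turbulence(img, rng):
    # Toy stand-in for a turbulence forward model: a random small
    # shift plus a horizontal blur. Illustrative only; the real method
    # uses a physically accurate anisoplanatic propagation model.
    shift = rng.integers(-2, 3, size=2)
    out = np.roll(img, shift, axis=(0, 1))
    kernel = np.ones(3) / 3.0
    return np.apply_along_axis(
        lambda row: np.convolve(row, kernel, mode="same"), 1, out)

def disc(x, w):
    # Logistic "discriminator" on the flattened measurement.
    return 1.0 / (1.0 + np.exp(-x.ravel() @ w))

# Captured measurements (simulated here from a hidden ground truth).
truth = rng.random((16, 16))
captured = [simulate_turbulence(truth, rng) for _ in range(64)]

# The image estimate plays the role of the generator's parameters.
est = rng.random((16, 16))
w = np.zeros(16 * 16)
lr_d, lr_g = 0.05, 0.05

for step in range(200):
    real = captured[step % len(captured)]
    fake = simulate_turbulence(est, rng)
    # Discriminator update: push D(real) toward 1, D(fake) toward 0.
    for x, label in ((real, 1.0), (fake, 0.0)):
        p = disc(x, w)
        w += lr_d * (label - p) * x.ravel()
    # Generator update: a crude surrogate gradient that moves the
    # estimate in the direction the discriminator rewards. TurbuGAN
    # instead backpropagates through the differentiable simulator.
    p = disc(fake, w)
    est = np.clip(est + lr_g * (1.0 - p) * w.reshape(16, 16), 0.0, 1.0)
```

The key structural point the sketch preserves is that no paired data appear anywhere: the estimate is fit only by matching the *distribution* of simulated measurements to that of the captured ones.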