Learning to recover clear images from images degraded by a combination of factors is a challenging task. At the same time, autonomous surveillance in low-visibility conditions caused by high pollution/smoke, poor air quality index, low light, atmospheric scattering, and haze during a blizzard becomes even more important to prevent accidents. It is thus crucial to develop a solution that produces high-quality images and is efficient enough to be deployed for everyday use. However, the lack of suitable datasets for this task has limited the performance of previously proposed methods. To this end, we generate the LowVis-AFO dataset, containing 3647 paired dark-hazy and clear images. We also introduce a lightweight deep learning model called Low-Visibility Restoration Network (LVRNet). It outperforms previous image restoration methods while maintaining low latency, achieving a PSNR value of 25.744 and an SSIM of 0.905, making our approach scalable and ready for practical use. The code and data can be found at https://github.com/Achleshwar/LVRNet.