In real-world underwater environments, exploration of seabed resources, underwater archaeology, and underwater fishing rely on a variety of sensors, among which the vision sensor is the most important due to its high information content, non-intrusiveness, and passive nature. However, wavelength-dependent light attenuation and back-scattering cause color distortion and a haze effect, which degrade image visibility. To address this problem, we first propose an unsupervised generative adversarial network (GAN) that generates realistic underwater images (exhibiting color distortion and haze) from in-air image and depth-map pairs, based on an improved underwater imaging model. Second, a U-Net, trained efficiently on the synthetic underwater dataset, is adopted for color restoration and dehazing. Our model directly reconstructs clear underwater images with an end-to-end autoencoder network while maintaining the structural similarity of scene content. The results obtained by our method are compared with existing methods both qualitatively and quantitatively. Experimental results demonstrate that the proposed model performs well on open real-world underwater datasets, and its processing speed reaches up to 125 FPS on a single NVIDIA 1060 GPU. Source code and sample datasets are publicly available at https://github.com/infrontofme/UWGAN_UIE.
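The wavelength-dependent attenuation and back-scattering mentioned above are commonly described by a simplified underwater image formation model, in which each color channel of the in-air radiance is attenuated exponentially with depth and mixed with a veiling background light. The sketch below illustrates that model only; the paper's actual synthesis uses a GAN, and the `beta` and `background` values here are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def synthesize_underwater(image, depth,
                          beta=(0.5, 0.12, 0.07),      # assumed RGB attenuation (red decays fastest in water)
                          background=(0.1, 0.4, 0.6)): # assumed bluish veiling light B_c
    """Simplified underwater image formation model, per channel c:
        I_c(x) = J_c(x) * exp(-beta_c * d(x)) + B_c * (1 - exp(-beta_c * d(x)))

    image : HxWx3 in-air image J, floats in [0, 1], RGB order
    depth : HxW depth map d in meters
    """
    beta = np.asarray(beta, dtype=np.float64)
    background = np.asarray(background, dtype=np.float64)
    # Transmission t_c(x) = exp(-beta_c * d(x)); broadcasting gives shape HxWx3
    t = np.exp(-depth[..., None] * beta[None, None, :])
    # Direct (attenuated) component plus back-scattered veiling light
    return image * t + background * (1.0 - t)
```

At large depths the transmission vanishes and the output converges to the background light, which is exactly the color cast and haze the U-Net in this work is trained to remove.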