Underwater images suffer from color cast, low contrast, and haze due to light absorption, refraction, and scattering, which degrade high-level applications, e.g., object detection and object tracking. Recent learning-based methods demonstrate astonishing performance on underwater image enhancement; however, most of these works use synthetic paired data for supervised learning and ignore the domain gap to real-world data. To solve this problem, we propose a domain adaptation framework for underwater image enhancement via content and style separation. Different from prior domain adaptation works for underwater image enhancement, which aim to minimize the latent discrepancy between synthetic and real-world data, we separate the encoded features into content and style latents, distinguish the style latents of different domains, i.e., the synthetic, real-world underwater, and clean domains, and perform domain adaptation and image enhancement in the latent space. Through latent manipulation, our model provides a user-interaction interface to continuously adjust the enhancement level. Experiments on various public real-world underwater benchmarks demonstrate that the proposed framework is capable of performing domain adaptation for underwater image enhancement and outperforms various state-of-the-art underwater image enhancement algorithms quantitatively and qualitatively. The model and source code will be available at https://github.com/fordevoted/UIESS
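To make the content-style separation and latent-manipulation idea concrete, the sketch below shows one possible PyTorch realization: an encoder splits an image into a content feature map and a style vector, a decoder recombines them, and interpolating toward a clean-domain style gives a continuously adjustable enhancement level. All module names, layer sizes, and the interpolation scheme are illustrative assumptions, not the authors' actual UIESS implementation.

```python
# Minimal sketch of content-style separation with style interpolation.
# Architecture details are assumptions for illustration only.
import torch
import torch.nn as nn

class ContentStyleEncoder(nn.Module):
    """Encodes an image into a content feature map and a style vector."""
    def __init__(self, ch=64, style_dim=8):
        super().__init__()
        self.content = nn.Sequential(
            nn.Conv2d(3, ch, 7, 1, 3), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, 2, 1), nn.ReLU(inplace=True),
        )
        self.style = nn.Sequential(
            nn.Conv2d(3, ch, 7, 2, 3), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(ch, style_dim),
        )

    def forward(self, x):
        return self.content(x), self.style(x)

class Decoder(nn.Module):
    """Reconstructs an image from a content map modulated by a style vector."""
    def __init__(self, ch=64, style_dim=8):
        super().__init__()
        self.affine = nn.Linear(style_dim, 2 * ch)  # per-channel scale/shift
        self.up = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(ch, ch, 3, 1, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 3, 7, 1, 3), nn.Tanh(),
        )

    def forward(self, content, style):
        scale, shift = self.affine(style).chunk(2, dim=1)
        feat = content * (1 + scale[..., None, None]) + shift[..., None, None]
        return self.up(feat)

# Enhancement by style swap: keep the degraded image's content latent but
# decode it with a clean-domain style latent; interpolating between the two
# styles yields a continuously adjustable enhancement level (alpha).
enc, dec = ContentStyleEncoder(), Decoder()
underwater = torch.rand(1, 3, 256, 256)
c_uw, s_uw = enc(underwater)
s_clean = torch.zeros_like(s_uw)   # placeholder for a clean-domain style
alpha = 0.7                        # 0 = input style, 1 = clean style
s_mix = (1 - alpha) * s_uw + alpha * s_clean
enhanced = dec(c_uw, s_mix)
```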