Recovering true color from underwater images is an ill-posed problem: the wide-band attenuation coefficients of the RGB color channels depend on object range, reflectance, and other factors that are difficult to model, and suspended particles in the water introduce backscatter. As a result, most existing deep-learning-based color restoration methods, which are trained on synthetic underwater datasets, perform poorly on real underwater data, since synthetic data cannot accurately represent real conditions. To address this issue, we use an image-to-image translation network to bridge the gap between the synthetic and real domains by translating images from the synthetic underwater domain to the real underwater domain. Using this multimodal domain-adaptation technique, we create a dataset that captures a diverse array of underwater conditions. We then train a simple but effective CNN-based network on our domain-adapted dataset to perform color restoration. Code and pre-trained models can be accessed at https://github.com/nehamjain10/TRUDGCR
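The wavelength-dependent attenuation and backscatter described above are commonly written as I_c = J_c · e^(−β_c d) + B_c · (1 − e^(−β_c d)), where J is the clean scene, β_c the per-channel wide-band attenuation coefficient, B_c the veiling (backscatter) light, and d the object range. Below is a minimal NumPy sketch of this simplified image-formation model; the coefficient values are purely illustrative (not fitted to any real water type) and the function name is our own:

```python
import numpy as np

def degrade_underwater(J, beta, B, d):
    """Apply a simplified underwater image-formation model:

        I_c = J_c * exp(-beta_c * d) + B_c * (1 - exp(-beta_c * d))

    J    : clean image, HxWx3, values in [0, 1]
    beta : per-channel wide-band attenuation coefficients (1/m)
    B    : per-channel veiling (backscatter) light
    d    : scene range in metres (scalar or HxW map)
    """
    # Per-channel transmission; broadcasting handles scalar or per-pixel range.
    t = np.exp(-np.asarray(beta) * np.atleast_3d(d))
    return J * t + np.asarray(B) * (1.0 - t)

# Red light attenuates fastest in water, so beta_R > beta_G > beta_B,
# which produces the familiar blue-green cast (illustrative values).
J = np.ones((4, 4, 3)) * 0.8          # uniform grey scene
I = degrade_underwater(J, beta=[0.6, 0.2, 0.1], B=[0.05, 0.3, 0.4], d=5.0)
```

Because β and B also depend on range and water type, inverting this model from a single image is under-determined, which is why the abstract calls the recovery problem ill-posed.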