In this paper, we propose a method to generate Bayer pattern images with generative adversarial networks (GANs). We show theoretically that training GANs on transformed data still enables the generator to learn the distribution of the original data, owing to the invariance of the Jensen-Shannon (JS) divergence between two distributions under an invertible and differentiable transformation. Bayer pattern images can be generated by configuring the transformation as demosaicing and converting existing standard color datasets to the Bayer domain. The proposed method is therefore promising for applications such as finding the optimal ISP configuration for computer vision tasks, for in-sensor or near-sensor computing, and even for photography. Experiments show that the images generated by our method outperform those of the original Pix2PixHD model in FID score, PSNR, and SSIM, and that the training process is more stable. In scenarios such as in-sensor or near-sensor computing for object detection, our method improves model performance without any modification to the image sensor.
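The theoretical claim rests on the invariance of the JS divergence under a change of variables. What follows is a minimal sketch of that argument, assuming the transformation $T$ (demosaicing, treated as invertible and differentiable) is a diffeomorphism; the densities $p$, $q$ and their push-forwards $p_T$, $q_T$ are notation introduced here for illustration. Under $y = T(x)$ the push-forward density is $p_T(y) = p(T^{-1}(y))\,\lvert\det J_{T^{-1}}(y)\rvert$, and likewise for $q_T$. The Jacobian factor cancels in the density ratio, so
\begin{align}
\mathrm{KL}(p_T \,\|\, q_T)
  &= \int p_T(y)\,\log\frac{p_T(y)}{q_T(y)}\,dy
   = \int p(T^{-1}(y))\,\lvert\det J_{T^{-1}}(y)\rvert\,\log\frac{p(T^{-1}(y))}{q(T^{-1}(y))}\,dy \\
  &= \int p(x)\,\log\frac{p(x)}{q(x)}\,dx
   = \mathrm{KL}(p \,\|\, q),
\end{align}
where the last step substitutes $x = T^{-1}(y)$. Since $\mathrm{JS}(p, q) = \tfrac{1}{2}\mathrm{KL}(p \,\|\, m) + \tfrac{1}{2}\mathrm{KL}(q \,\|\, m)$ with $m = \tfrac{1}{2}(p + q)$, and the push-forward of a mixture is the mixture of the push-forwards, the same cancellation yields $\mathrm{JS}(p_T, q_T) = \mathrm{JS}(p, q)$: minimizing the GAN objective in the transformed domain is equivalent to minimizing it in the original domain.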
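As a concrete illustration of converting a standard color dataset to the Bayer domain, the following is a minimal NumPy sketch of RGGB mosaicing; the function name rgb_to_bayer_rggb and the choice of the RGGB layout are assumptions made here for illustration, not details taken from the paper.

import numpy as np

def rgb_to_bayer_rggb(rgb: np.ndarray) -> np.ndarray:
    """Sample a single-channel RGGB Bayer mosaic from an H x W x 3 RGB image.

    Per 2x2 block, the kept samples are:
        R  G
        G  B
    """
    h, w, _ = rgb.shape
    bayer = np.zeros((h, w), dtype=rgb.dtype)
    bayer[0::2, 0::2] = rgb[0::2, 0::2, 0]  # R at even rows, even cols
    bayer[0::2, 1::2] = rgb[0::2, 1::2, 1]  # G at even rows, odd cols
    bayer[1::2, 0::2] = rgb[1::2, 0::2, 1]  # G at odd rows, even cols
    bayer[1::2, 1::2] = rgb[1::2, 1::2, 2]  # B at odd rows, odd cols
    return bayer

# Usage: mosaic a color dataset image before GAN training.
rgb = (np.random.rand(256, 256, 3) * 255).astype(np.uint8)
bayer = rgb_to_bayer_rggb(rgb)  # shape (256, 256), single channel

In a pipeline like the one described, such mosaiced images would serve as training data in the Bayer domain, while the demosaicing step maps them back to the RGB domain where the JS-invariance argument above applies.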