The prime goal of digital imaging techniques is to reproduce the realistic appearance of a scene. Low Dynamic Range (LDR) cameras are incapable of representing the wide dynamic range of a real-world scene, so the captured images turn out either too dark (underexposed) or too bright (overexposed). In particular, saturation in overexposed regions makes reconstructing a High Dynamic Range (HDR) image from a single LDR image challenging. In this paper, we propose a deep-learning-based approach to recover details in the saturated areas while reconstructing the HDR image. We formulate this problem as an image-to-image (I2I) translation task. To this end, we present a novel conditional GAN (cGAN) based framework trained end-to-end on the HDR-REAL and HDR-SYNTH datasets. Our framework uses an overexposed mask obtained from a pre-trained segmentation model to facilitate the hallucination task of adding details in the saturated regions. We demonstrate the effectiveness of the proposed method through extensive quantitative and qualitative comparisons with several state-of-the-art single-image HDR reconstruction techniques.
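While the paper obtains the overexposed mask from a pre-trained segmentation model, a minimal threshold-based stand-in illustrates what such a mask encodes. This sketch is purely illustrative (the threshold value and the all-channels criterion are assumptions, not the paper's method):

```python
import numpy as np

def overexposed_mask(ldr, threshold=0.95):
    """Binary mask of saturated (overexposed) pixels.

    ldr: float image in [0, 1] with shape (H, W, 3).
    A pixel is marked overexposed when every channel exceeds `threshold`.
    Illustrative stand-in for the pre-trained segmentation model used
    in the paper; real saturation detection is learned, not thresholded.
    """
    return (ldr >= threshold).all(axis=-1).astype(np.float32)

# Tiny example: first pixel is saturated in all channels, second is dark.
img = np.array([[[1.0, 1.0, 0.99], [0.2, 0.3, 0.1]]])
mask = overexposed_mask(img)
# mask has shape (1, 2); the saturated pixel maps to 1.0, the dark one to 0.0
```

Such a mask, concatenated with the LDR input, tells the generator which regions require hallucinated detail rather than straightforward inverse tone mapping.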