High dynamic range (HDR) videos provide a more visually realistic experience than standard low dynamic range (LDR) videos. Despite significant progress in HDR imaging, capturing high-quality HDR video with a conventional off-the-shelf camera remains a challenging task. Existing approaches rely entirely on dense optical flow between neighboring LDR frames to reconstruct an HDR frame; however, they produce temporal inconsistencies in color and exposure when applied to noisy sequences with alternating exposures. In this paper, we propose an end-to-end GAN-based framework for HDR video reconstruction from LDR sequences with alternating exposures. We first extract clean LDR frames from the noisy alternating-exposure LDR video using a denoising network trained in a self-supervised setting. Using optical flow, we then align the neighboring alternating-exposure frames to a reference frame and reconstruct high-quality HDR frames in a fully adversarial setting. To further improve the robustness and quality of the generated frames, we incorporate a temporal stability-based regularization term, along with content- and style-based losses, into the cost function during training. Experimental results demonstrate that our framework achieves state-of-the-art performance and generates HDR video frames of superior quality compared to existing methods.
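The training objective described above combines an adversarial loss with content- and style-based losses and a temporal stability-based regularization term. The sketch below is an illustrative assumption of how such a composite objective could be assembled: the specific L2 form of the temporal term and all loss weights are hypothetical placeholders, not the paper's exact formulation.

```python
import numpy as np

def temporal_stability_loss(hdr_t, hdr_prev_warped):
    """Assumed temporal-stability regularizer: penalize deviation between
    the current HDR frame and the previous HDR frame warped (e.g. via
    optical flow) into the current frame's coordinates."""
    return float(np.mean((hdr_t - hdr_prev_warped) ** 2))

def total_loss(l_adv, l_content, l_style, l_temporal,
               w_content=1.0, w_style=0.1, w_temporal=0.5):
    """Weighted sum of the individual loss terms; the weights here are
    placeholder values, not the paper's tuned hyperparameters."""
    return l_adv + w_content * l_content + w_style * l_style + w_temporal * l_temporal

# Toy check: identical consecutive frames incur zero temporal penalty,
# so the regularizer only activates when the output flickers over time.
frame = np.ones((4, 4, 3))
print(temporal_stability_loss(frame, frame))  # 0.0
```

In practice the warped previous frame would come from the same optical-flow alignment used for the neighboring exposures, so the regularizer discourages frame-to-frame flicker in color and exposure without requiring extra supervision.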