Deep convolutional neural network (DCNN)-aided high dynamic range (HDR) imaging has recently received much attention. The quality of DCNN-generated HDR images has surpassed that of traditional counterparts. However, DCNNs tend to be computationally intensive and power-hungry. To address this challenge, we propose LightFuse, a lightweight CNN-based algorithm for extreme dual-exposure image fusion that can be implemented on various embedded computing platforms with limited power and hardware resources. Two sub-networks are used: a GlobalNet (G) and a DetailNet (D). G learns global illumination information along the spatial dimension, whereas D enhances local details along the channel dimension. Both G and D are built solely from depthwise convolution (D Conv) and pointwise convolution (P Conv) to reduce the required parameters and computations. Experimental results demonstrate that the proposed technique generates HDR images with plausible details in extremely exposed regions. Our PSNR score exceeds those of other state-of-the-art approaches by a factor of 1.2 to 1.6, and we achieve a 1.4 to 20 times reduction in FLOPs and parameters compared with other methods.
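To illustrate why replacing standard convolution with a depthwise-plus-pointwise pair shrinks the model, the sketch below counts weights for both. The channel and kernel sizes are illustrative only and are not LightFuse's actual layer configuration.

```python
def standard_conv_params(c_in, c_out, k):
    # A standard k x k convolution mixes space and channels jointly:
    # k * k * c_in weights per output channel (biases omitted).
    return k * k * c_in * c_out

def separable_conv_params(c_in, c_out, k):
    # Depthwise conv: one k x k filter per input channel (spatial only),
    # followed by a 1 x 1 pointwise conv that mixes channels.
    depthwise = k * k * c_in
    pointwise = c_in * c_out
    return depthwise + pointwise

# Example: 64 -> 64 channels with 3x3 kernels (hypothetical sizes).
std = standard_conv_params(64, 64, 3)   # 36864 weights
sep = separable_conv_params(64, 64, 3)  # 576 + 4096 = 4672 weights
print(f"standard: {std}, separable: {sep}, reduction: {std / sep:.1f}x")
```

The separable form needs roughly 1/k^2 + 1/c_out of the standard parameter count, which is the source of the FLOP and parameter savings claimed above.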