Deep convolutional neural networks (DCNNs) have recently advanced high dynamic range (HDR) imaging and attracted considerable attention. The quality of DCNN-generated HDR images surpasses that of traditional methods. However, DCNNs tend to be computationally intensive and power-hungry, and therefore cannot be deployed on embedded computing platforms with limited power and hardware resources. Embedded systems represent a huge market, and bringing the capabilities of DCNNs to them would further reduce human intervention. To address this challenge, we propose LightFuse, a lightweight CNN-based algorithm for extreme dual-exposure image fusion that performs better than conventional DCNNs and can be deployed on embedded systems. Two sub-networks are used: a GlobalNet (G) and a DetailNet (D). G learns global illumination information along the spatial dimension, whereas D enhances local details along the channel dimension. Both G and D are built solely from depthwise convolution (D_Conv) and pointwise convolution (P_Conv) to reduce the required parameters and computations. Experimental results show that the proposed technique generates HDR images with legible detail in extremely exposed regions. Our model outperforms other state-of-the-art approaches in peak signal-to-noise ratio (PSNR) by 0.9 to 8.7 dB while using 16.7 to 306.2 times fewer parameters.
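To make the parameter savings concrete, the sketch below shows how a depthwise convolution (D_Conv) followed by a pointwise convolution (P_Conv) replaces a standard convolution. This is a minimal illustration in PyTorch under assumed channel counts (64 in, 128 out, 3x3 kernel); it is not the authors' published LightFuse configuration, only a demonstration of the D_Conv + P_Conv building block the abstract describes.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise conv (D_Conv) followed by pointwise conv (P_Conv)."""
    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        # D_Conv: one spatial filter per input channel (groups=in_ch),
        # so filtering happens independently per channel
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2, groups=in_ch)
        # P_Conv: 1x1 conv mixes information across channels
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

def count_params(m):
    return sum(p.numel() for p in m.parameters())

# Illustrative comparison (assumed sizes): 64 -> 128 channels, 3x3 kernel
standard = nn.Conv2d(64, 128, 3, padding=1)
separable = DepthwiseSeparableConv(64, 128, 3)
print(count_params(standard))   # 73,856 = 64*128*3*3 + 128
print(count_params(separable))  # 8,960  = (64*3*3 + 64) + (64*128 + 128)
```

For this assumed layer size the separable form uses roughly 8x fewer parameters; stacking many such blocks is what drives the order-of-magnitude reductions reported above.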