Reconstructing ghosting-free high dynamic range (HDR) images of dynamic scenes from a set of multi-exposure images is a challenging task, especially in the presence of large object motion and occlusions, which lead to visible artifacts in existing methods. To address this problem, we propose a deep network that learns multi-scale feature flow guided by a regularized loss. It first extracts multi-scale features and then aligns the features of the non-reference images to the reference. After alignment, we use residual channel attention blocks to merge the features from different images. Extensive qualitative and quantitative comparisons show that our approach achieves state-of-the-art performance and produces excellent results in which color artifacts and geometric distortions are significantly reduced.
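The sketch below is a minimal, single-scale PyTorch illustration of the pipeline described above: shared feature extraction, flow-guided alignment of non-reference features, and merging with residual channel attention blocks. All module names, channel counts, and the two-exposure setup are illustrative assumptions rather than the paper's exact architecture or loss.

```python
# Hypothetical sketch of the feature-flow HDR pipeline (not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class ResidualChannelAttentionBlock(nn.Module):
    """Residual block with squeeze-and-excitation style channel attention."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.attention = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        res = self.body(x)
        return x + res * self.attention(res)


def warp(features, flow):
    """Backward-warp features (B, C, H, W) with a per-pixel flow field (B, 2, H, W)."""
    b, _, h, w = features.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, device=features.device, dtype=features.dtype),
        torch.arange(w, device=features.device, dtype=features.dtype),
        indexing="ij",
    )
    grid_x = (xs + flow[:, 0]) / max(w - 1, 1) * 2 - 1  # normalize to [-1, 1]
    grid_y = (ys + flow[:, 1]) / max(h - 1, 1) * 2 - 1
    grid = torch.stack((grid_x, grid_y), dim=-1)         # (B, H, W, 2)
    return F.grid_sample(features, grid, align_corners=True)


class HDRFeatureFlowNet(nn.Module):
    """Illustrative two-exposure model: one reference and one non-reference input."""
    def __init__(self, channels=64):
        super().__init__()
        self.extract = nn.Sequential(   # shared feature extractor
            nn.Conv2d(6, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.flow_head = nn.Sequential(  # predicts a feature-space flow field
            nn.Conv2d(2 * channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 2, 3, padding=1),
        )
        self.merge = nn.Sequential(      # fuse aligned features into an HDR image
            nn.Conv2d(2 * channels, channels, 3, padding=1),
            ResidualChannelAttentionBlock(channels),
            ResidualChannelAttentionBlock(channels),
            nn.Conv2d(channels, 3, 3, padding=1),
        )

    def forward(self, ref, non_ref):
        # Each input stacks the LDR frame with its exposure-normalized version (6 channels).
        f_ref, f_non = self.extract(ref), self.extract(non_ref)
        flow = self.flow_head(torch.cat([f_ref, f_non], dim=1))
        f_aligned = warp(f_non, flow)    # align non-reference features to the reference
        return torch.sigmoid(self.merge(torch.cat([f_ref, f_aligned], dim=1)))


if __name__ == "__main__":
    net = HDRFeatureFlowNet()
    ref, non_ref = torch.rand(1, 6, 64, 64), torch.rand(1, 6, 64, 64)
    print(net(ref, non_ref).shape)       # torch.Size([1, 3, 64, 64])
```

In a multi-scale variant, the extractor would return a feature pyramid and the flow would be estimated coarse-to-fine, with the regularized loss encouraging smooth, plausible flow fields.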