Multispectral image fusion is a fundamental problem in image processing and remote sensing, and it is addressed by both classic and deep learning approaches. This paper focuses on classic solutions that can run in real-time systems and introduces a novel approach within this group of works. The proposed method carries out multispectral image fusion based on the content of the fused images: it relies on an analysis of the level of information within segmented superpixels of the fused inputs. Specifically, the proposed method addresses the task of fusing visible color (RGB) images with Near-Infrared (NIR) images. The RGB image captures the color of the scene, while the NIR channel captures fine details and can see through haze and clouds. Since each channel senses different information about the scene, their multispectral fusion is challenging and interesting. The proposed method is therefore designed to produce a fusion that retains the relevant content of each spectral band. The experiments in this manuscript show that the proposed method produces visually informative results compared with other classic fusion methods. Moreover, it runs quickly on embedded devices without heavy computational requirements.
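To make the superpixel-level idea concrete, the following is a minimal sketch rather than the paper's implementation: it assumes SLIC superpixels and uses per-superpixel variance as a stand-in for the "level of information" measure, blending the RGB luminance with the NIR channel accordingly; the actual segmentation, information measure, and blending rule of the proposed method may differ.

```python
# Minimal sketch of superpixel-driven RGB/NIR fusion (illustrative only).
# Assumption: per-superpixel variance approximates the "level of information".
import numpy as np
from skimage.segmentation import slic
from skimage.color import rgb2ycbcr, ycbcr2rgb

def fuse_rgb_nir(rgb, nir, n_segments=400):
    """Fuse an RGB image and a NIR channel (both float arrays in [0, 1])."""
    ycbcr = rgb2ycbcr(rgb)                  # keep chroma from the RGB input
    luma = ycbcr[..., 0] / 255.0            # visible-band luminance
    segments = slic(rgb, n_segments=n_segments, start_label=0)

    fused_luma = np.empty_like(luma)
    for label in np.unique(segments):
        mask = segments == label
        # Information proxy per superpixel: variance of each channel.
        w_vis = luma[mask].var()
        w_nir = nir[mask].var()
        total = w_vis + w_nir + 1e-8
        # Weighted blend favoring the more "informative" channel locally.
        fused_luma[mask] = (w_vis * luma[mask] + w_nir * nir[mask]) / total

    ycbcr[..., 0] = np.clip(fused_luma, 0.0, 1.0) * 255.0
    return np.clip(ycbcr2rgb(ycbcr), 0.0, 1.0)
```

In this sketch the chrominance is taken entirely from the RGB input and only the luminance is fused, which is one common way to preserve scene color while importing NIR detail; it is an illustrative design choice, not necessarily the one used in the paper.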