This paper investigates the problem of reconstructing hyperspectral (HS) images from single RGB images captured by commercial cameras, \textbf{without} using paired HS and RGB images during training. To tackle this challenge, we propose a new lightweight, end-to-end learning-based framework. Specifically, based on the intrinsic imaging degradation model that relates RGB images to HS images, we progressively spread the differences between input RGB images and the RGB images re-projected from recovered HS images via effective unsupervised estimation of the camera spectral response function. To enable learning without paired ground-truth HS images as supervision, we adopt an adversarial learning paradigm and boost it with a simple yet effective $\mathcal{L}_1$ gradient clipping scheme. Moreover, we embed the semantic information of input RGB images to locally regularize the unsupervised learning, which encourages pixels with identical semantics to have consistent spectral signatures. In addition to conducting quantitative experiments on two widely-used datasets for HS image reconstruction from synthetic RGB images, we also evaluate our method by applying HS images recovered from real RGB images to HS-based visual tracking. Extensive results show that our method significantly outperforms state-of-the-art unsupervised methods and even exceeds the latest supervised method under some settings. The source code is publicly available at https://github.com/zbzhzhy/Unsupervised-Spectral-Reconstruction.
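To make the re-projection idea concrete, below is a minimal sketch (not the paper's implementation) of the RGB-from-HS degradation model and the unsupervised consistency term: a learnable camera spectral response (CSR) function, modeled here as a $1\times1$ convolution over 31 spectral bands, maps a recovered HS cube back to RGB, and the $\mathcal{L}_1$ difference with the input RGB drives the update. The band count, function names, and the use of \texttt{clip\_grad\_norm\_} as a stand-in for the paper's $\mathcal{L}_1$ gradient clipping scheme are all illustrative assumptions.

\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative sizes only: 31 spectral bands, 3 RGB channels.
NUM_BANDS, NUM_RGB = 31, 3

# Learnable CSR function: each RGB channel is a weighted sum of HS bands,
# implemented as a 1x1 convolution without bias.
csr = nn.Conv2d(NUM_BANDS, NUM_RGB, kernel_size=1, bias=False)

def reproject_rgb(hs_cube: torch.Tensor) -> torch.Tensor:
    """Re-project a recovered HS cube (B, 31, H, W) back to RGB (B, 3, H, W)."""
    return csr(hs_cube)

def train_step(rgb_in, hs_pred, optimizer, max_grad_norm=1.0):
    # L1 consistency between the input RGB and the re-projected RGB.
    rgb_reproj = reproject_rgb(hs_pred)
    loss = F.l1_loss(rgb_reproj, rgb_in)
    optimizer.zero_grad()
    loss.backward()
    # Gradient clipping to stabilize training; this is only a stand-in for
    # the paper's L1 gradient clipping scheme, whose exact form is not
    # specified in the abstract.
    torch.nn.utils.clip_grad_norm_(csr.parameters(), max_grad_norm)
    optimizer.step()
    return loss.item()

# Toy usage with random tensors in place of real images and a real generator.
rgb = torch.rand(1, NUM_RGB, 64, 64)
hs = torch.rand(1, NUM_BANDS, 64, 64)
opt = torch.optim.Adam(csr.parameters(), lr=1e-4)
print(train_step(rgb, hs, opt))
\end{verbatim}

In the full framework, \texttt{hs\_pred} would come from the HS reconstruction network and the adversarial and semantic-consistency terms would be added to this re-projection loss; the sketch only isolates the degradation-model consistency step.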