Hyperspectral images, which record the electromagnetic spectrum at each pixel of a scene, often store hundreds of channels per pixel and contain an order of magnitude more information than a similarly sized color image. Consequently, as the cost of capturing these images continues to fall, there is a growing need for efficient techniques to store, transmit, and analyze hyperspectral images. This paper develops a method for hyperspectral image compression using implicit neural representations, in which a multilayer perceptron network $\Phi_\theta$ with sinusoidal activation functions ``learns'' to map pixel locations to pixel intensities for a given hyperspectral image $I$. $\Phi_\theta$ thus acts as a compressed encoding of this image, and the original image is reconstructed by evaluating $\Phi_\theta$ at each pixel location. We evaluate our method on four benchmarks -- Indian Pines, Cuprite, Pavia University, and Jasper Ridge -- and show that the proposed method achieves better compression than JPEG, JPEG2000, and PCA-DCT at low bitrates.
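To make the approach concrete, the sketch below shows one possible realization in PyTorch of such an implicit neural representation: a SIREN-style MLP that maps 2-D pixel coordinates to a $C$-channel spectrum and is overfit to a single hyperspectral image, so that the trained weights $\theta$ serve as the compressed encoding. The layer widths, depth, frequency factor $\omega_0 = 30$, learning rate, and step count are illustrative assumptions, not the configuration reported in the paper.

```python
# Minimal sketch (not the paper's exact configuration) of a SIREN-style MLP
# Phi_theta that maps 2-D pixel coordinates to a C-channel spectrum.
import torch
import torch.nn as nn


class SineLayer(nn.Module):
    def __init__(self, in_features, out_features, omega_0=30.0, is_first=False):
        super().__init__()
        self.omega_0 = omega_0
        self.linear = nn.Linear(in_features, out_features)
        # SIREN-style initialization keeps activations well-behaved across layers.
        with torch.no_grad():
            bound = 1.0 / in_features if is_first else (6.0 / in_features) ** 0.5 / omega_0
            self.linear.weight.uniform_(-bound, bound)

    def forward(self, x):
        return torch.sin(self.omega_0 * self.linear(x))


class HyperspectralINR(nn.Module):
    """Phi_theta: (x, y) in [-1, 1]^2  ->  C-channel pixel spectrum."""

    def __init__(self, num_channels, hidden=256, depth=4):
        super().__init__()
        layers = [SineLayer(2, hidden, is_first=True)]
        layers += [SineLayer(hidden, hidden) for _ in range(depth - 1)]
        layers += [nn.Linear(hidden, num_channels)]
        self.net = nn.Sequential(*layers)

    def forward(self, coords):
        return self.net(coords)


def pixel_coords(H, W):
    # Normalized pixel-location grid, flattened to shape (H * W, 2).
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, H), torch.linspace(-1, 1, W), indexing="ij"
    )
    return torch.stack([xs, ys], dim=-1).reshape(-1, 2)


def compress(image, steps=2000, lr=1e-4):
    # Overfit the network to one image I of shape (H, W, C); the weights theta
    # are then the compressed representation of I.
    H, W, C = image.shape
    coords, target = pixel_coords(H, W), image.reshape(-1, C)
    model = HyperspectralINR(C)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((model(coords) - target) ** 2).mean()
        loss.backward()
        opt.step()
    return model


def decompress(model, H, W):
    # Reconstruction: evaluate Phi_theta at every pixel location.
    with torch.no_grad():
        return model(pixel_coords(H, W)).reshape(H, W, -1)
```

In this sketch, compression amounts to storing (and optionally quantizing and entropy-coding) the network parameters, so the bitrate is controlled by the network size rather than by the image resolution.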