Clustering of hyperspectral images is a fundamental but challenging task. Recent work on hyperspectral image clustering has evolved from shallow models to deep ones, achieving promising results on many benchmark datasets. However, the poor scalability, robustness, and generalization of these methods, mainly resulting from their offline clustering scenarios, greatly limit their application to large-scale hyperspectral data. To circumvent these problems, we present a scalable deep online clustering model, named Spectral-Spatial Contrastive Clustering (SSCC), based on self-supervised learning. Specifically, we exploit a symmetric twin neural network, composed of a projection head whose output dimensionality equals the number of clusters, to conduct dual contrastive learning from a spectral-spatial augmentation pool. We define an objective function that implicitly encourages within-cluster similarity and reduces between-cluster redundancy. The resulting approach is trained end-to-end via batch-wise optimization, making it robust on large-scale data and yielding good generalization to unseen data. Extensive experiments on three hyperspectral image benchmarks demonstrate the effectiveness of our approach and show that it outperforms state-of-the-art methods by large margins.
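To make the objective concrete, the following is a minimal sketch of one plausible way to encourage within-cluster similarity while reducing between-cluster redundancy: a Barlow Twins-style cross-correlation loss applied to the cluster-probability outputs of the two twin branches. The class name `ClusterContrastiveLoss`, the weight `lambda_offdiag`, and the exact normalization are illustrative assumptions and may differ from the actual SSCC formulation.

```python
import torch
import torch.nn as nn


class ClusterContrastiveLoss(nn.Module):
    """Hypothetical sketch: redundancy-reduction loss on the cluster-probability
    outputs of two augmented views from a symmetric twin network."""

    def __init__(self, lambda_offdiag: float = 5e-3):
        super().__init__()
        self.lambda_offdiag = lambda_offdiag  # assumed weight for the off-diagonal term

    def forward(self, p1: torch.Tensor, p2: torch.Tensor) -> torch.Tensor:
        # p1, p2: (batch, n_clusters) soft cluster assignments of the two views.
        n = p1.shape[0]
        # Standardize each cluster dimension over the batch before correlating.
        z1 = (p1 - p1.mean(0)) / (p1.std(0) + 1e-6)
        z2 = (p2 - p2.mean(0)) / (p2.std(0) + 1e-6)
        # Cross-correlation matrix between the cluster dimensions of the two views.
        c = (z1.T @ z2) / n  # shape: (n_clusters, n_clusters)
        diag = torch.diagonal(c)
        # Pull diagonal entries toward 1: the same cluster responds consistently
        # across the two views (within-cluster similarity).
        on_diag = (diag - 1).pow(2).sum()
        # Push off-diagonal entries toward 0: different clusters stay decorrelated
        # (between-cluster redundancy reduction).
        off_diag = c.pow(2).sum() - diag.pow(2).sum()
        return on_diag + self.lambda_offdiag * off_diag


# Usage with a shared-weight twin encoder producing cluster logits (hypothetical):
# logits1, logits2 = model(view1), model(view2)
# loss = ClusterContrastiveLoss()(logits1.softmax(dim=1), logits2.softmax(dim=1))
```

Because the loss is computed per mini-batch, this kind of objective supports batch-wise, end-to-end optimization, which is consistent with the online, large-scale training scenario described above.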