Label-embedded dictionary learning (DL) algorithms produce discriminative dictionaries by incorporating label information. However, they share a limitation: because they depend on ground-truth labels, they achieve strong performance only in supervised learning and lose effectiveness in semi-supervised and unsupervised settings. Inspired by self-supervised learning (i.e., designing a pretext task that yields a universal model for a downstream task), we propose a Self-Supervised Dictionary Learning (SSDL) framework to address this challenge. Specifically, we first design a $p$-Laplacian Attention Hypergraph Learning (pAHL) block as the pretext task to generate pseudo soft labels for DL. We then use these pseudo labels to train a dictionary with a standard label-embedded DL method. We evaluate SSDL on two human activity recognition datasets; comparisons with other state-of-the-art methods demonstrate the effectiveness of SSDL.
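To make the second stage concrete, the following is a minimal sketch of label-embedded dictionary learning driven by soft (pseudo) labels. It is not the paper's actual method: the objective, the alternating least-squares solver, the function name `ssdl_sketch`, and all parameter choices are illustrative assumptions, and the sparsity constraint used by real DL methods is omitted for brevity. The sketch minimizes $\|X - DA\|_F^2 + \lambda\|Q - WA\|_F^2$, where $Q$ holds the soft pseudo labels produced by a pretext task such as pAHL.

```python
import numpy as np

def ssdl_sketch(X, Q, n_atoms=8, lam=1.0, n_iter=20, seed=0):
    """Toy label-embedded dictionary learning with soft pseudo labels.

    Minimizes ||X - D A||_F^2 + lam * ||Q - W A||_F^2 by alternating
    least squares. Illustrative only: real label-embedded DL methods
    also enforce sparsity on the codes A, which is omitted here.

    X : (d, n) data matrix, one sample per column
    Q : (c, n) soft pseudo-label matrix (e.g., from a pretext task)
    """
    rng = np.random.default_rng(seed)
    d, n = X.shape
    c = Q.shape[0]
    D = rng.standard_normal((d, n_atoms))   # dictionary
    W = rng.standard_normal((c, n_atoms))   # linear label predictor
    A = None
    for _ in range(n_iter):
        # Coding step: solve the stacked least-squares system
        # [X; sqrt(lam) Q] ≈ [D; sqrt(lam) W] A for the codes A.
        M = np.vstack([D, np.sqrt(lam) * W])
        Y = np.vstack([X, np.sqrt(lam) * Q])
        A = np.linalg.lstsq(M, Y, rcond=None)[0]
        # Update step: refit D and W to the current codes,
        # then renormalize the dictionary atoms to unit length.
        A_pinv = np.linalg.pinv(A)
        D = X @ A_pinv
        D /= np.linalg.norm(D, axis=0, keepdims=True) + 1e-12
        W = Q @ A_pinv
    return D, W, A
```

In the SSDL pipeline described above, `Q` would come from the pAHL pretext task rather than from ground-truth annotations, which is what removes the dependence on labels in semi-supervised and unsupervised settings.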