In Self-Supervised Learning (SSL), various pretext tasks are designed to learn feature representations through a contrastive loss. However, previous studies have shown that this loss is less tolerant of semantically similar samples, an inherent limitation of instance-discrimination objectives that can harm the quality of the learned feature embeddings used in downstream tasks. To improve the discriminative ability of feature embeddings in SSL, we propose a new loss function called Angular Contrastive Loss (ACL), a linear combination of an angular margin loss and a contrastive loss. ACL improves contrastive learning by explicitly adding an angular margin between positive and negative augmented pairs in SSL. Experimental results show that using ACL in both supervised and unsupervised learning significantly improves performance. We validated the new loss function on the FSDnoisy18k dataset, achieving 73.6% and 77.1% accuracy in sound event classification with supervised and self-supervised learning, respectively.
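To make the idea concrete, the combination described above can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: it pairs a standard NT-Xent contrastive loss with an ArcFace-style additive angular margin on the positive pairs, and the function names, the margin value, the temperature `tau`, and the mixing weight `lam` are all assumptions introduced here for illustration.

```python
import numpy as np

def nt_xent(sim, pos_idx, tau):
    """NT-Xent loss computed from a (2N, 2N) cosine-similarity matrix.

    sim[i, pos_idx[i]] is the similarity between the two augmented
    views of the same sample; all other off-diagonal entries are
    treated as negatives.
    """
    n = sim.shape[0]
    logits = sim / tau
    np.fill_diagonal(logits, -np.inf)  # exclude self-similarity
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(n), pos_idx].mean()

def angular_contrastive_loss(z1, z2, margin=0.3, tau=0.1, lam=0.5):
    """Hypothetical sketch of an ACL-style objective:
    lam * (margin-augmented loss) + (1 - lam) * (plain contrastive loss).

    z1, z2: (N, d) embeddings of two augmented views of the same batch.
    """
    # L2-normalize so dot products are cosine similarities
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    z = np.vstack([z1, z2])                      # (2N, d)
    n2 = z.shape[0]
    sim = np.clip(z @ z.T, -1.0, 1.0)
    pos_idx = (np.arange(n2) + n2 // 2) % n2     # view i pairs with i+N

    # plain contrastive part
    l_con = nt_xent(sim, pos_idx, tau)

    # angular part: add a margin to the angle of each positive pair,
    # shrinking its similarity and widening the positive/negative gap
    sim_m = sim.copy()
    theta = np.arccos(sim_m[np.arange(n2), pos_idx])
    sim_m[np.arange(n2), pos_idx] = np.cos(theta + margin)
    l_ang = nt_xent(sim_m, pos_idx, tau)

    return lam * l_ang + (1.0 - lam) * l_con
```

With `margin=0` the angular term reduces to the plain contrastive loss, so the margin and the weight `lam` control how strongly the angular separation is enforced on top of ordinary instance discrimination.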