Current contrastive learning frameworks focus on leveraging a single supervisory signal to learn representations, which limits their efficacy on unseen data and downstream tasks. In this paper, we present a hierarchical multi-label representation learning framework that can leverage all available labels and preserve the hierarchical relationship between classes. We introduce novel hierarchy-preserving losses, which jointly apply a hierarchical penalty to the contrastive loss and enforce the hierarchy constraint. The loss function is data-driven and automatically adapts to arbitrary multi-label structures. Experiments on several datasets show that our relationship-preserving embedding performs well on a variety of tasks and outperforms the baseline supervised and self-supervised approaches. Code is available at https://github.com/salesforce/hierarchicalContrastiveLearning.
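To make the idea of a hierarchical penalty on the contrastive loss concrete, the following is a minimal NumPy sketch, not the authors' implementation: each positive pair is weighted by the deepest hierarchy level at which the two samples' labels agree, so pairs matching only at coarser levels contribute less. The function name, the `labels` layout (one label column per level, coarsest first), and the `level_weights` parameter are all illustrative assumptions.

```python
import numpy as np

def hier_contrastive_loss(z, labels, level_weights, temperature=0.1):
    """Hierarchy-weighted supervised contrastive loss (illustrative sketch).

    z             : (n, d) array of L2-normalized embeddings
    labels        : (n, L) array, one label per hierarchy level, coarsest first
    level_weights : length-L weights; pairs matching only at coarse levels
                    get a smaller weight (the hierarchical penalty)
    """
    n = z.shape[0]
    sim = z @ z.T / temperature
    np.fill_diagonal(sim, -np.inf)  # exclude self-similarity from the softmax
    # log-softmax over each row: log p(j | i)
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))

    loss = 0.0
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            # deepest level at which samples i and j share a label
            depth = 0
            for l in range(labels.shape[1]):
                if labels[i, l] == labels[j, l]:
                    depth = l + 1
                else:
                    break
            if depth == 0:
                continue  # no shared ancestor: treated as a pure negative
            loss -= level_weights[depth - 1] * log_prob[i, j]
    return loss / n
```

Pairs that agree only at a coarse level are down-weighted rather than dropped, which is one simple way to preserve the class hierarchy in the embedding; the paper's actual loss and weighting scheme may differ.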