Zero-shot learning (ZSL) aims to predict unseen classes whose samples have never appeared during training. Among the most effective and widely used forms of semantic information for zero-shot image classification are attributes, i.e., annotations of class-level visual characteristics. However, current methods often fail to discriminate subtle visual distinctions between images, due not only to the shortage of fine-grained annotations but also to attribute imbalance and co-occurrence. In this paper, we present a transformer-based end-to-end ZSL method named DUET, which integrates latent semantic knowledge from pre-trained language models (PLMs) via a self-supervised multi-modal learning paradigm. Specifically, we (1) develop a cross-modal semantic grounding network to investigate the model's capability of disentangling semantic attributes from images; (2) apply an attribute-level contrastive learning strategy to further enhance the model's discrimination of fine-grained visual characteristics against attribute co-occurrence and imbalance; (3) propose a multi-task learning policy for considering multi-modal objectives. We find that DUET achieves state-of-the-art performance on three standard ZSL benchmarks and a knowledge-graph-equipped ZSL benchmark, that its components are effective, and that its predictions are interpretable.
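To make the second component concrete, below is a minimal, hypothetical sketch of an attribute-level contrastive loss in PyTorch. It is an illustration under stated assumptions, not the authors' implementation: the function name attribute_contrastive_loss, the tensor shapes, and the binary per-attribute labels are all assumed. The intuition is that visual features grounded to one attribute slot are pulled together for samples that agree on that attribute and pushed apart for samples that disagree, which discourages the model from leaning on co-occurring or over-represented attributes.

    # Hypothetical sketch (not the DUET implementation): an InfoNCE-style
    # supervised contrastive loss applied per attribute. Samples that share
    # the same binary attribute label act as positives for each other.
    import torch
    import torch.nn.functional as F

    def attribute_contrastive_loss(feats: torch.Tensor,
                                   attr_labels: torch.Tensor,
                                   temperature: float = 0.1) -> torch.Tensor:
        # feats: (B, D) attribute-grounded visual features for one attribute slot
        # attr_labels: (B,) binary indicator of whether each sample has the attribute
        feats = F.normalize(feats, dim=-1)
        sim = feats @ feats.t() / temperature          # (B, B) scaled cosine similarities
        # Positives: other samples with the same attribute label (self excluded).
        pos_mask = (attr_labels.unsqueeze(0) == attr_labels.unsqueeze(1)).float()
        pos_mask.fill_diagonal_(0.0)
        # Log-softmax over all non-self samples in the batch.
        logits_mask = torch.ones_like(sim).fill_diagonal_(0.0)
        exp_sim = torch.exp(sim) * logits_mask
        log_prob = sim - torch.log(exp_sim.sum(dim=1, keepdim=True) + 1e-8)
        # Average log-probability of the positives for each anchor.
        mean_pos = (pos_mask * log_prob).sum(dim=1) / pos_mask.sum(dim=1).clamp(min=1)
        return -mean_pos.mean()

    # Usage: feats would come from a grounding network; labels from
    # class-attribute annotations. Shapes here are arbitrary for illustration.
    loss = attribute_contrastive_loss(torch.randn(8, 256), torch.randint(0, 2, (8,)))

In practice such a loss would be summed or averaged over attributes and combined with the other objectives under the multi-task learning policy; reweighting rare attributes is one plausible way the imbalance issue could be addressed.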