The performance of generative zero-shot methods mainly depends on the quality of the generated features and on how well the model facilitates knowledge transfer between the visual and semantic domains. The quality of generated features is a direct consequence of the model's ability to capture the multiple modes of the underlying data distribution. To address these issues, we propose a new two-level joint maximization approach that augments the generative network with an inference network during training, which helps our model capture the multiple modes of the data and generate features that better represent the underlying data distribution. This provides strong cross-modal interaction for effective transfer of knowledge between the visual and semantic domains. Furthermore, existing methods train the zero-shot classifier either on generated synthetic image features or on latent embeddings produced through representation learning. In this work, we unify these paradigms into a single model which, in addition to synthesizing image features, also utilizes the representation learning capabilities of the inference network to provide discriminative features for the final zero-shot recognition task. We evaluate our approach on four benchmark datasets, i.e., CUB, FLO, AWA1, and AWA2, against several state-of-the-art methods, and report its performance. We also perform ablation studies to analyze and understand our method in more detail on the Generalized Zero-Shot Learning task.
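To make the unified pipeline concrete, the following is a minimal, illustrative sketch (not the authors' code): a conditional generator synthesizes image features from noise and class semantics, an inference network maps features to a latent embedding, and the final zero-shot classifier consumes the concatenation of the synthesized feature and its inferred embedding. All dimensions and module names (FEAT_DIM, ATTR_DIM, Z_DIM, LATENT_DIM) are hypothetical placeholders, not values from the paper.

```python
# Hedged sketch of the unified generative + representation-learning pipeline.
# Assumed dimensions; the paper's actual architecture and sizes may differ.
import torch
import torch.nn as nn

FEAT_DIM, ATTR_DIM, Z_DIM, LATENT_DIM, NUM_CLASSES = 2048, 312, 64, 128, 200

class Generator(nn.Module):
    """Synthesizes an image feature from noise plus class semantics."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(Z_DIM + ATTR_DIM, 4096), nn.LeakyReLU(0.2),
            nn.Linear(4096, FEAT_DIM), nn.ReLU())

    def forward(self, z, attr):
        return self.net(torch.cat([z, attr], dim=1))

class InferenceNet(nn.Module):
    """Maps an image feature to a latent embedding (representation learning)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(FEAT_DIM, 1024), nn.LeakyReLU(0.2),
            nn.Linear(1024, LATENT_DIM))

    def forward(self, x):
        return self.net(x)

# Final zero-shot classifier trained on [synthetic feature ; latent embedding],
# unifying the feature-synthesis and representation-learning paradigms.
classifier = nn.Linear(FEAT_DIM + LATENT_DIM, NUM_CLASSES)

G, E = Generator(), InferenceNet()
attr = torch.randn(8, ATTR_DIM)   # semantic vectors of unseen classes
z = torch.randn(8, Z_DIM)         # noise sampled to cover intra-class modes
x_syn = G(z, attr)                # synthesized image features
h = E(x_syn)                      # inferred discriminative embedding
logits = classifier(torch.cat([x_syn, h], dim=1))
```

In this sketch, jointly exposing the classifier to both the synthesized feature and the inference network's embedding is what stands in for the paper's unification of the two training paradigms; the two-level joint maximization objective itself is omitted, as the abstract does not specify its form.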