Focusing on discriminative zero-shot learning, in this work we introduce a novel mechanism that dynamically augments the set of seen classes during training with additional fictitious classes. These fictitious classes reduce the model's tendency to fixate during training on attribute correlations that appear in the training set but do not hold for newly exposed classes. The proposed model is evaluated under both formulations of the zero-shot learning framework, namely generalized zero-shot learning (GZSL) and classical zero-shot learning (CZSL). Our model improves the state-of-the-art performance on the CUB dataset and reaches comparable results on the other common datasets, AWA2 and SUN. We investigate the strengths and weaknesses of our method, including the effect of catastrophic forgetting when training an end-to-end zero-shot model.
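The abstract does not specify how the fictitious classes are constructed. One common way to synthesize new class semantics, shown here purely as an illustrative sketch and not the paper's actual mechanism, is to form convex combinations of the attribute vectors of pairs of seen classes; the function name `make_fictitious_classes` and all parameters below are hypothetical:

```python
import numpy as np

def make_fictitious_classes(attrs, n_new, rng=None):
    """Synthesize fictitious class attribute vectors by randomly
    mixing pairs of seen-class attribute vectors (convex combination).

    attrs: (n_seen, d) array, one attribute vector per seen class.
    n_new: number of fictitious classes to generate.
    Returns an (n_new, d) array of fictitious attribute vectors.
    """
    rng = np.random.default_rng(rng)
    n_seen, _ = attrs.shape
    # Pick two (possibly equal) seen classes per fictitious class.
    i = rng.integers(0, n_seen, size=n_new)
    j = rng.integers(0, n_seen, size=n_new)
    # Mixing coefficient per fictitious class, broadcast over attributes.
    lam = rng.uniform(0.0, 1.0, size=(n_new, 1))
    return lam * attrs[i] + (1.0 - lam) * attrs[j]

# Toy example: two seen classes described by 3 attributes.
seen = np.array([[1.0, 0.0, 0.5],
                 [0.0, 1.0, 0.5]])
fict = make_fictitious_classes(seen, n_new=4, rng=0)
print(fict.shape)  # (4, 3)
```

Because each fictitious vector is a convex combination, it stays within the per-attribute range of the seen classes, which would break spurious attribute correlations present in the original seen set while remaining plausible.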