Zero-shot learning (ZSL) refers to the problem of learning to classify instances of novel (unseen) classes that are absent from the training set of seen classes. Most ZSL methods infer the correlation between visual features and attributes to train a classifier for unseen classes. However, such models may acquire a strong bias towards seen classes during training. Meta-learning has been introduced to mitigate this bias, but meta-ZSL methods are inapplicable when the tasks used for training are sampled from diverse distributions. In this regard, we propose a novel Task-aligned Generative Meta-learning model for Zero-shot learning (TGMZ). TGMZ mitigates the potentially biased training and enables meta-ZSL to accommodate real-world datasets containing diverse distributions. TGMZ incorporates an attribute-conditioned task-wise distribution alignment network that projects tasks into a unified distribution to deliver an unbiased model. Our comparisons with state-of-the-art algorithms show improvements of 2.1%, 3.0%, 2.5%, and 7.6% achieved by TGMZ on the AWA1, AWA2, CUB, and aPY datasets, respectively. TGMZ also outperforms competitors by 3.6% in the generalized zero-shot learning (GZSL) setting and by 7.9% in our proposed fusion-ZSL setting.
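To make the idea of attribute-conditioned task-wise alignment concrete, the following is a minimal sketch, not the paper's actual network: each task's features are standardized to remove its task-specific statistics, then re-projected with a per-dimension scale and shift predicted from the task's attribute summary, so that all tasks land in one shared distribution. The function name `align_task` and the linear maps `W_scale`/`W_shift` are hypothetical placeholders for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def align_task(features, attributes, W_scale, W_shift):
    """Hypothetical attribute-conditioned task alignment (illustrative only).

    features:   (n_samples, d_feat) visual features of one task
    attributes: (n_classes, d_attr) semantic attributes of the task's classes
    W_scale, W_shift: (d_attr, d_feat) linear maps predicting scale/shift
    """
    # Standardize within the task to strip its task-specific distribution.
    mu = features.mean(axis=0, keepdims=True)
    sigma = features.std(axis=0, keepdims=True) + 1e-6
    normed = (features - mu) / sigma

    # Condition the re-projection on a task-level attribute summary,
    # so semantically similar tasks map to similar regions.
    a = attributes.mean(axis=0)          # (d_attr,)
    gamma = a @ W_scale                  # per-dimension scale, (d_feat,)
    beta = a @ W_shift                   # per-dimension shift, (d_feat,)
    return normed * gamma + beta

# Toy usage: two tasks drawn from very different feature distributions.
d_feat, d_attr = 8, 5
W_scale = rng.normal(size=(d_attr, d_feat))
W_shift = rng.normal(size=(d_attr, d_feat))

task_a = rng.normal(loc=3.0, scale=2.0, size=(16, d_feat))
task_b = rng.normal(loc=-1.0, scale=0.5, size=(16, d_feat))
attrs = rng.random(size=(4, d_attr))     # shared attribute space

aligned_a = align_task(task_a, attrs, W_scale, W_shift)
aligned_b = align_task(task_b, attrs, W_scale, W_shift)
```

Because both tasks are conditioned on the same attribute summary here, their aligned features share the same predicted scale and shift, which is the intuition behind projecting diverse tasks into a unified distribution before meta-training.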