Recently, zero-shot learning (ZSL) has emerged as an exciting topic and attracted considerable attention. ZSL aims to classify unseen classes by transferring knowledge from seen classes to unseen classes based on class descriptions. Despite showing promising performance, ZSL approaches assume that training samples from all seen classes are available during training, which is not feasible in practice. To address this issue, we propose a more generalized and practical setup for ZSL, i.e., continual ZSL (CZSL), where classes arrive sequentially in the form of tasks and the model actively learns from the changing environment by leveraging past experience. Further, to enhance reliability, we develop CZSL for the single-head continual learning setting, where the task identity is revealed during training but not during testing. To avoid catastrophic forgetting and intransigence, we use knowledge distillation and store and replay a few samples from previous tasks using a small episodic memory. We develop baselines and evaluate generalized CZSL on five ZSL benchmark datasets under two continual learning settings: with and without class-incremental learning. Moreover, CZSL is developed for two types of variational autoencoders, which generate two types of features for classification: (i) features generated in the output space and (ii) discriminative features generated in the latent space. The experimental results clearly indicate that single-head CZSL is more generalizable and suitable for practical applications.
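To make the two forgetting-avoidance mechanisms named above concrete, the following is a minimal PyTorch-style sketch (not the authors' implementation) of a single-head continual training step that combines replay from a small episodic memory with knowledge distillation against a frozen copy of the model from earlier tasks. All names here (`train_task`, `mem_per_task`, the toy classifier in the usage comment) are hypothetical placeholders, and the classifier head is fixed up front for simplicity rather than grown class-incrementally.

```python
import copy
import random
import torch
import torch.nn as nn
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # Soft-target KL distillation; T is the softmax temperature.
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T

def train_task(model, task_loader, memory, old_model=None,
               distill_weight=1.0, mem_per_task=20, lr=1e-3):
    """Train on one task: current-task loss + episodic replay + distillation."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for x, y in task_loader:
        if memory:  # replay: mix in a few stored samples from previous tasks
            mx, my = zip(*random.sample(memory, min(len(memory), len(x))))
            x = torch.cat([x, torch.stack(list(mx))])
            y = torch.cat([y, torch.stack(list(my))])
        logits = model(x)
        loss = F.cross_entropy(logits, y)
        if old_model is not None:  # distill from the frozen previous-task model
            with torch.no_grad():
                old_logits = old_model(x)
            loss = loss + distill_weight * distillation_loss(logits, old_logits)
        opt.zero_grad()
        loss.backward()
        opt.step()
    # Keep a few exemplars from this task in the small episodic memory.
    for x, y in task_loader:
        for i in range(min(mem_per_task, len(x))):
            memory.append((x[i], y[i]))
        break
    return copy.deepcopy(model).eval()  # frozen teacher for the next task

# Usage sketch: tasks arrive sequentially; the frozen snapshot after each task
# serves as the distillation teacher for the next one.
# model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10))
# memory, teacher = [], None
# for loader in task_loaders:  # one DataLoader per sequential task
#     teacher = train_task(model, loader, memory, old_model=teacher)
```

Replaying stored exemplars counters intransigence (the replayed labels keep old decision boundaries trainable), while the distillation term penalizes drift of the model's outputs on both current and replayed inputs, which is the standard division of labor between the two losses.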