Zero-shot learning (ZSL) is a paradigm for classifying objects from classes that are not available at training time, and ZSL methods have attracted considerable attention in recent years because of their ability to classify unseen/novel class examples. Most existing ZSL approaches assume that all samples from the seen classes are available to train the model, which does not hold in real life. In this paper, we tackle this hindrance by developing a generative replay-based continual ZSL (GRCZSL). The proposed method enables traditional ZSL to learn from streaming data and acquire new knowledge without forgetting the experience gained on previous tasks. We handle catastrophic forgetting in GRCZSL by replaying synthetic samples of the seen classes that appeared in earlier tasks. These synthetic samples are generated by a conditional variational autoencoder (VAE) trained on the immediately preceding task; consequently, only the current and immediately previous VAEs are required at any time for training and testing. The proposed GRCZSL method is developed for the single-head setting of continual learning, which simulates a real-world problem setting: task identity is given during training but is unavailable during testing. GRCZSL is evaluated on five benchmark datasets for the generalized ZSL setup under both fixed and incremental class settings of continual learning. Experimental results show that the proposed method significantly outperforms the baseline method, making it more suitable for real-world applications.
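To make the replay mechanism concrete, below is a minimal sketch of generative replay with a conditional VAE, assuming a PyTorch-style implementation. All names (`CVAE`, `train_task`, `replay`), the network widths, the feature/attribute dimensions, and the number of replayed samples per class are hypothetical placeholders for illustration, not the paper's actual configuration.

```python
import torch
import torch.nn as nn

# Illustrative dimensions only (e.g., CNN features plus class-attribute
# vectors); the paper's actual architecture is not specified here.
FEAT_DIM, ATTR_DIM, LATENT_DIM = 2048, 85, 64

class CVAE(nn.Module):
    """Conditional VAE: encodes a visual feature conditioned on its class
    attribute vector and decodes a feature from (latent, attribute)."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Linear(FEAT_DIM + ATTR_DIM, 512), nn.ReLU(),
            nn.Linear(512, 2 * LATENT_DIM))   # outputs mu and log-variance
        self.dec = nn.Sequential(
            nn.Linear(LATENT_DIM + ATTR_DIM, 512), nn.ReLU(),
            nn.Linear(512, FEAT_DIM))

    def forward(self, x, a):
        mu, logvar = self.enc(torch.cat([x, a], dim=1)).chunk(2, dim=1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        return self.dec(torch.cat([z, a], dim=1)), mu, logvar

    @torch.no_grad()
    def replay(self, attrs, n_per_class):
        """Synthesize features of previously seen classes from their attributes."""
        a = attrs.repeat_interleave(n_per_class, dim=0)
        z = torch.randn(a.size(0), LATENT_DIM)
        return self.dec(torch.cat([z, a], dim=1)), a

def train_task(loader, past_attrs, prev_cvae, epochs=1):
    """Train a fresh CVAE on the current task, replaying synthetic samples
    of earlier seen classes from the previous task's frozen CVAE."""
    cvae = CVAE()
    opt = torch.optim.Adam(cvae.parameters(), lr=1e-3)
    for _ in range(epochs):
        for x, a in loader:                    # real data of the current task
            if prev_cvae is not None:          # generative replay step
                x_rep, a_rep = prev_cvae.replay(past_attrs, n_per_class=4)
                x, a = torch.cat([x, x_rep]), torch.cat([a, a_rep])
            x_hat, mu, logvar = cvae(x, a)
            recon = ((x_hat - x) ** 2).sum(dim=1).mean()
            kld = -0.5 * (1 + logvar - mu ** 2 - logvar.exp()).sum(dim=1).mean()
            opt.zero_grad()
            (recon + kld).backward()
            opt.step()
    return cvae  # becomes prev_cvae for the next task

# Usage sketch: chain tasks so each new CVAE is trained against replay from
# its immediate predecessor, which can then be discarded.
# prev = None
# for loader, attrs_seen_so_far in task_stream:   # hypothetical task stream
#     prev = train_task(loader, attrs_seen_so_far, prev)
```

Because only the immediately preceding CVAE is queried for replay, older generators can be discarded after each task, which is consistent with the abstract's claim that only the current and immediately previous VAEs are needed at any time.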