Continual learning is the problem of learning from data that arrive sequentially, possibly in a non-i.i.d. manner, with a task distribution that changes over time. Most work on continual learning to date focuses on supervised learning, with some recent work on unsupervised learning. In many domains, however, each task contains a mix of labelled (typically very few) and unlabelled (typically plenty of) training examples, which calls for a semi-supervised learning approach. To address this in a continual learning setting, we propose a framework for semi-supervised continual learning called Meta-Consolidation for Continual Semi-Supervised Learning (MCSSL). Our framework has a hypernetwork that learns the meta-distribution generating the weights of a semi-supervised auxiliary classifier generative adversarial network ($\textit{Semi-ACGAN}$) serving as the base network. We consolidate the knowledge of sequential tasks in the hypernetwork, while the base network learns each semi-supervised learning task. Further, we present $\textit{Semi-Split CIFAR-10}$, a new benchmark for continual semi-supervised learning, obtained by modifying the $\textit{Split CIFAR-10}$ dataset so that tasks with labelled and unlabelled data arrive sequentially. Our proposed model yields significant improvements in the continual semi-supervised learning setting, and we compare the performance of several existing continual learning approaches on the proposed Semi-Split CIFAR-10 benchmark.
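To make the weight-generation idea concrete, below is a minimal sketch of a hypernetwork that maps a task embedding to the parameters of a base model, written in PyTorch. This is not the paper's implementation: the base model here is a single linear classifier standing in for the full Semi-ACGAN, and the class names, layer sizes, and task-embedding dimension are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HyperNetwork(nn.Module):
    """Maps a task embedding to the flat parameter vector of a base classifier.

    Hypothetical sizes; in MCSSL the base network is a Semi-ACGAN, which is
    far larger than the single linear layer generated here.
    """
    def __init__(self, embed_dim: int, base_in: int, base_out: int):
        super().__init__()
        self.base_in, self.base_out = base_in, base_out
        n_params = base_out * base_in + base_out  # weight matrix + bias
        self.net = nn.Sequential(
            nn.Linear(embed_dim, 128),
            nn.ReLU(),
            nn.Linear(128, n_params),
        )

    def forward(self, task_embedding: torch.Tensor):
        flat = self.net(task_embedding)
        n_w = self.base_out * self.base_in
        w = flat[:n_w].view(self.base_out, self.base_in)  # generated weights
        b = flat[n_w:]                                    # generated bias
        return w, b

# Forward pass of the base "network" using hypernetwork-generated weights.
hyper = HyperNetwork(embed_dim=8, base_in=32, base_out=10)
z_task = torch.randn(8)       # learned per-task embedding (assumption)
w, b = hyper(z_task)          # parameters produced by the hypernetwork
x = torch.randn(4, 32)        # a batch of input features
logits = F.linear(x, w, b)    # base classifier applies the generated weights
```

Because the hypernetwork, rather than the base network, is the object trained across tasks, consolidating knowledge amounts to regularizing or replaying in the hypernetwork's parameter space; the sketch above only illustrates the weight-generation step, not the consolidation procedure.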