Existing continual learning methods focus mainly on fully-supervised scenarios and are still unable to take advantage of unlabeled data available in the environment. Some recent works have investigated semi-supervised continual learning (SSCL) settings in which unlabeled data are available, but only from the same distribution as the labeled data. This assumption is still not general enough for real-world applications and restricts the utilization of unsupervised data. In this work, we introduce Open-Set Semi-Supervised Continual Learning (OSSCL), a more realistic semi-supervised continual learning setting in which out-of-distribution (OoD) unlabeled samples in the environment are assumed to coexist with the in-distribution ones. Under this configuration, we present a model with two distinct parts: (i) a reference network that captures general-purpose and task-agnostic knowledge in the environment by using a broad spectrum of unlabeled samples, and (ii) a learner network that learns task-specific representations by exploiting supervised samples. The reference model both provides a pivotal representation space and segregates unlabeled data so that they can be exploited more efficiently. Through a diverse range of experiments, we show the superior performance of our model compared with other competitors and demonstrate the effectiveness of each component of the proposed model.