We study Online Continual Learning with missing labels and propose SemiCon, a new contrastive loss designed for partly labeled data. We demonstrate its efficiency by devising a memory-based method trained on an unlabeled data stream, where every sample added to memory is labeled by an oracle. Our approach outperforms existing semi-supervised methods when few labels are available, and obtains results comparable to state-of-the-art supervised methods while using only 2.6% of labels on Split-CIFAR10 and 10% of labels on Split-CIFAR100.
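The abstract names a contrastive loss for partly labeled data but does not spell out its form. As a rough illustration only, the sketch below combines a SupCon-style term (same-class samples attract) for labeled memory samples with a SimCLR-style instance term (augmented views attract) for unlabeled stream samples; the function name `semicon_style_loss` and every detail of this formulation are assumptions, not the paper's actual definition of SemiCon.

```python
import torch
import torch.nn.functional as F

def semicon_style_loss(z, labels, temperature=0.07):
    """Illustrative semi-supervised contrastive loss (hypothetical sketch).

    z:      (2B, D) L2-normalized projections; rows [0:B] and [B:2B] are
            two augmented views of the same B inputs, in matching order.
    labels: (2B,) integer class ids, with -1 marking unlabeled samples.
    """
    n = z.shape[0]
    b = n // 2
    device = z.device

    # Pairwise cosine similarities scaled by temperature.
    sim = z @ z.t() / temperature

    # Exclude self-pairs from the softmax denominator.
    self_mask = torch.eye(n, dtype=torch.bool, device=device)
    sim = sim.masked_fill(self_mask, float('-inf'))
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)

    # Augmented-view positives: row i pairs with row (i + B) mod 2B.
    view_pos = torch.roll(torch.eye(n, device=device), shifts=b, dims=1).bool()

    # Class positives (SupCon-style), defined only between labeled samples.
    labeled = labels >= 0
    class_pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) \
        & labeled.unsqueeze(0) & labeled.unsqueeze(1) & ~self_mask

    # Labeled samples attract all same-class samples plus their other view;
    # unlabeled samples attract only their other augmented view.
    pos_mask = torch.where(labeled.unsqueeze(1), class_pos | view_pos, view_pos)

    # Average log-probability over each sample's positives, then negate.
    pos_count = pos_mask.sum(1).clamp(min=1)
    loss = -(log_prob.masked_fill(~pos_mask, 0.0).sum(1) / pos_count)
    return loss.mean()

# Toy usage: 4 images, two views each; two carry oracle labels, two do not.
z = F.normalize(torch.randn(8, 128), dim=1)
labels = torch.tensor([0, 1, -1, -1, 0, 1, -1, -1])
print(semicon_style_loss(z, labels))
```

Under these assumptions, setting every label to -1 recovers a purely self-supervised objective, while fully labeling the batch recovers a SupCon-like one, which matches the abstract's premise of interpolating between the unlabeled stream and the oracle-labeled memory.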