Continual learning (CL) is widely regarded as a crucial challenge for lifelong AI. However, existing CL benchmarks, e.g., Permuted-MNIST and Split-CIFAR, rely on artificial temporal variation and do not align with or generalize to the real world. In this paper, we introduce CLEAR, the first continual image classification benchmark dataset built on a natural temporal evolution of visual concepts in the real world, spanning a decade (2004-2014). We build CLEAR from an existing large-scale image collection (YFCC100M) through a novel, scalable, and low-cost approach to visio-linguistic dataset curation. Our pipeline makes use of pretrained vision-language models (e.g., CLIP) to interactively build labeled datasets, which are further validated via crowd-sourcing to remove errors and even inappropriate images (hidden in the original YFCC100M). The major strength of CLEAR over prior CL benchmarks is the smooth temporal evolution of visual concepts with real-world imagery, including both high-quality labeled data and abundant unlabeled samples per time period for continual semi-supervised learning. We find that a simple unsupervised pre-training step can already boost state-of-the-art CL algorithms that only utilize fully-supervised data. Our analysis also reveals that mainstream CL evaluation protocols, which train and test on iid data, artificially inflate the performance of CL systems. To address this, we propose novel "streaming" protocols for CL that always test on the (near) future. Interestingly, streaming protocols (a) can simplify dataset curation, since today's test set can be repurposed as tomorrow's training set, and (b) can produce more generalizable models with more accurate estimates of performance, since all labeled data from each time period is used for both training and testing (unlike classic iid train-test splits).
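To make the CLIP-assisted curation step concrete, below is a minimal sketch of zero-shot ranking with a pretrained vision-language model, using the open-source `clip` package (https://github.com/openai/CLIP). The concept list, image paths, and `top_k` cutoff are hypothetical placeholders; this is an illustration of the general technique, not the authors' exact pipeline.

```python
# Minimal sketch: rank candidate images for a visual concept with CLIP.
# Assumes the open-source `clip` package; concepts/paths are hypothetical.
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

concepts = ["laptop", "camera", "bus", "sweatshirt"]  # hypothetical concepts
text = clip.tokenize([f"a photo of a {c}" for c in concepts]).to(device)

@torch.no_grad()
def rank_images_for_concept(image_paths, concept_idx, top_k=100):
    """Score each image against all concept prompts and keep the images
    most confidently matching the requested concept (candidates for a
    labeled set, to be validated by crowd-sourcing)."""
    scores = []
    for path in image_paths:
        image = preprocess(Image.open(path)).unsqueeze(0).to(device)
        logits_per_image, _ = model(image, text)
        probs = logits_per_image.softmax(dim=-1).squeeze(0)
        scores.append((probs[concept_idx].item(), path))
    scores.sort(reverse=True)
    return [path for _, path in scores[:top_k]]
```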
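The "streaming" evaluation protocol can likewise be sketched in a few lines: train on all labeled data from time bucket t, then test on bucket t+1, so that each period's test set becomes the next period's training set. Here `train_model` and `evaluate` are hypothetical placeholders standing in for any CL algorithm and metric; this is a sketch of the protocol, not a prescribed implementation.

```python
# Minimal sketch of the streaming protocol: always test on the (near) future.
# `buckets` is a list of labeled datasets ordered by time period;
# `train_model` and `evaluate` are hypothetical placeholders.
def streaming_evaluation(buckets, train_model, evaluate):
    """Returns per-period accuracy when always testing on the next period."""
    accuracies = []
    model = None
    for t in range(len(buckets) - 1):
        # Every labeled example in bucket t is used for training;
        # no iid train/test split is carved out of the same period.
        model = train_model(model, buckets[t])
        # Today's test set is tomorrow's training set: bucket t+1 is
        # evaluated now and will be trained on at the next step.
        accuracies.append(evaluate(model, buckets[t + 1]))
    return accuracies
```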