We introduce the "Incremental Implicitly-Refined Classification (IIRC)" setup, an extension of the class incremental learning setup where the incoming batches of classes have two granularity levels, i.e., each sample could have a high-level (coarse) label like "bear" and a low-level (fine) label like "polar bear". Only one label is provided at a time, and the model has to figure out the other label if it has already learned it. This setup is more aligned with real-life scenarios, where a learner usually interacts with the same family of entities multiple times and discovers more granularity about them, while still trying not to forget previous knowledge. Moreover, this setup enables evaluating models on important lifelong learning challenges that cannot be easily addressed under existing setups. These challenges can be motivated by an example: if a model was trained on the class "bear" in one task and on "polar bear" in another, will it forget the concept of "bear", will it rightfully infer that a polar bear is still a bear, and will it wrongfully associate the label "polar bear" with other breeds of bear? We develop a standardized benchmark that enables evaluating models on the IIRC setup. We evaluate several state-of-the-art lifelong learning algorithms and highlight their strengths and limitations. For example, distillation-based methods perform relatively well but are prone to incorrectly predicting too many labels per image. We hope that the proposed setup, along with the benchmark, will provide a meaningful problem setting for practitioners.
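To make the setup concrete, below is a minimal Python sketch of the evaluation logic described above: a sample is trained with only one label, but once both granularity levels have been introduced across tasks, the model is expected to predict both. All names here (HIERARCHY, expected_labels) are hypothetical illustrations, not the benchmark's actual API.

```python
# Fine (subclass) -> coarse (superclass) mapping; None means no superclass.
# This toy hierarchy mirrors the "bear" / "polar bear" example from the text.
HIERARCHY = {
    "polar bear": "bear",
    "brown bear": "bear",
    "bear": None,
}

def expected_labels(sample_label: str, seen_classes: set) -> set:
    """Return the full set of labels the model should predict for a sample.

    Only `sample_label` was provided during training, but a correct model
    should also predict the superclass if it was introduced in an earlier
    task (e.g., "polar bear" implies "bear"), without over-predicting
    labels that do not apply.
    """
    labels = {sample_label}
    parent = HIERARCHY.get(sample_label)
    if parent is not None and parent in seen_classes:
        labels.add(parent)
    return labels

# Example: "bear" was learned in task 1, "polar bear" in task 2.
seen = {"bear", "polar bear"}
print(expected_labels("polar bear", seen))  # {'polar bear', 'bear'}
```

Under this view, the failure modes in the abstract map directly onto the predicted label set: forgetting drops "bear", failing to infer the hierarchy misses "bear" for a polar-bear image, and over-prediction (as observed for distillation-based methods) adds labels that are not in the expected set.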