While deep neural networks (DNNs) have achieved impressive classification performance in closed-world learning scenarios, they typically fail to generalize to unseen categories in dynamic open-world environments, in which the number of concepts is unbounded. In contrast, human and animal learners have the ability to incrementally update their knowledge by recognizing and adapting to novel observations. In particular, humans characterize concepts via exclusive (unique) sets of essential features, which are used both for recognizing known classes and for identifying novelty. Inspired by natural learners, we introduce a Sparse High-level-Exclusive, Low-level-Shared feature representation (SHELS) that simultaneously encourages learning exclusive sets of high-level features and essential, shared low-level features. The exclusivity of the high-level features enables the DNN to automatically detect out-of-distribution (OOD) data, while the efficient use of capacity via sparse low-level features permits accommodating new knowledge. The resulting approach uses OOD detection to perform class-incremental continual learning without known class boundaries. We show that using SHELS for novelty detection yields statistically significant improvements over state-of-the-art OOD detection approaches across a variety of benchmark datasets. Further, we demonstrate that the SHELS model mitigates catastrophic forgetting in a class-incremental learning setting, enabling a combined novelty detection and accommodation framework that supports learning in open-world settings.
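The core detection idea above can be illustrated with a minimal sketch. This is a hypothetical toy example, not the authors' implementation: it assumes that, with class-exclusive high-level features, each known class fires a distinct feature group strongly, so an input whose strongest group response falls below a threshold can be flagged as OOD. The function name `detect_ood` and the threshold value are illustrative assumptions.

```python
import numpy as np

def detect_ood(activations, threshold=0.5):
    """Flag an input as OOD when no class-exclusive high-level
    feature group responds strongly.

    activations : 1-D array of per-class feature-group responses.
    threshold   : illustrative cutoff on the maximum response.
    """
    return bool(np.max(activations) < threshold)

# Known-class input: one exclusive feature group fires strongly.
print(detect_ood(np.array([0.05, 0.92, 0.03])))  # False (in-distribution)

# Novel input: no exclusive feature group fires.
print(detect_ood(np.array([0.12, 0.08, 0.10])))  # True (OOD)
```

In the full framework described above, such a detection signal would trigger accommodation of the novel class, rather than simply rejecting the input.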