Current deep learning methods are regarded as favorable if they empirically perform well on dedicated test sets. This mentality is seamlessly reflected in the resurfacing area of continual learning, where consecutively arriving data is investigated. The core challenge is framed as protecting previously acquired representations from being catastrophically forgotten. However, comparison of individual methods is performed in isolation from the real world by monitoring accumulated benchmark test set performance. The closed world assumption remains predominant, i.e., models are evaluated on data that is guaranteed to originate from the same distribution as used for training. This poses a massive challenge, as neural networks are well known to provide overconfident false predictions on unknown and corrupted instances. In this work we critically survey the literature and argue that notable lessons from open set recognition, identifying unknown examples outside of the observed set, and from the adjacent field of active learning, querying data to maximize the expected performance gain, are frequently overlooked in the deep learning era. Hence, we propose a consolidated view to bridge continual learning, active learning, and open set recognition in deep neural networks. Finally, the established synergies are supported empirically, showing joint improvement in alleviating catastrophic forgetting, querying data, and selecting task orders, while exhibiting robust open world application.
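To make the proposed bridge concrete, the minimal NumPy sketch below (an illustration, not the paper's specific method) shows how one shared uncertainty signal, predictive entropy, can serve both adjacent tasks: thresholding it yields a simple open set rejection rule, and ranking an unlabeled pool by it yields a standard maximum-entropy active learning query. The toy logits, pool, and threshold value are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def predictive_entropy(probs):
    """Shannon entropy of each predictive distribution (higher = more uncertain)."""
    return -(probs * np.log(probs + 1e-12)).sum(axis=-1)

# Toy logits: confident in-distribution predictions vs. unknown inputs.
known_logits = rng.normal(0.0, 1.0, size=(5, 10))
known_logits[np.arange(5), rng.integers(0, 10, 5)] += 6.0  # one dominant class each
unknown_logits = rng.normal(0.0, 1.0, size=(5, 10))        # no dominant class

# Open set recognition: reject inputs whose predictive entropy exceeds a threshold.
threshold = 1.0  # hypothetical value; in practice tuned on validation data
rejected = predictive_entropy(softmax(unknown_logits)) > threshold

# Active learning: query the unlabeled pool examples the model is least certain about.
pool_probs = softmax(rng.normal(0.0, 1.0, size=(100, 10)))
query_idx = np.argsort(predictive_entropy(pool_probs))[-10:]  # top-10 most uncertain
```

Note that plain softmax entropy is itself subject to the overconfidence on unknown and corrupted instances discussed above, which is precisely why the surveyed open set recognition literature moves beyond raw softmax confidence; the sketch only illustrates how a single uncertainty measure can drive both rejection and querying.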