Learning continually from non-stationary data streams is a challenging research topic that has grown in popularity over the last few years. Being able to learn, adapt, and generalize continually in an efficient, effective, and scalable way is fundamental for the sustainable development of Artificial Intelligence systems. However, an agent-centric view of continual learning requires learning directly from raw data, which limits the interaction between independent agents as well as the efficiency and privacy of current approaches. Instead, we argue that continual learning systems should exploit the availability of compressed information in the form of trained models. In this paper, we introduce and formalize a new paradigm named "Ex-Model Continual Learning" (ExML), in which an agent learns from a sequence of previously trained models instead of raw data. We further contribute three ex-model continual learning algorithms and an empirical setting comprising three datasets (MNIST, CIFAR-10, and CORe50) and eight scenarios, where the proposed algorithms are extensively tested. Finally, we highlight the peculiarities of the ex-model paradigm and point out interesting future research directions.
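To make the paradigm concrete, the sketch below shows one plausible instantiation of an ExML update step based on knowledge distillation: the continual model absorbs each trained expert in the stream by matching the expert's outputs on surrogate inputs, since the experts' raw training data is unavailable by assumption. All names here (`ex_model_update`, `surrogate_loader`, the distillation recipe and hyperparameters) are illustrative assumptions, not the paper's exact algorithms.

```python
import torch
import torch.nn.functional as F

def ex_model_update(continual_model, expert, surrogate_loader,
                    epochs=1, lr=1e-3, temperature=2.0):
    """Distill one frozen expert model into the continual model.

    Sketch only: `surrogate_loader` stands in for whatever auxiliary or
    synthetic unlabeled data the agent can access, because in the ExML
    setting the expert's original training data is not available.
    """
    expert.eval()
    continual_model.train()
    optimizer = torch.optim.Adam(continual_model.parameters(), lr=lr)
    for _ in range(epochs):
        for (x,) in surrogate_loader:  # unlabeled inputs only
            with torch.no_grad():
                teacher_logits = expert(x)
            student_logits = continual_model(x)
            # Soft-target distillation loss (Hinton et al., 2015)
            loss = F.kl_div(
                F.log_softmax(student_logits / temperature, dim=1),
                F.softmax(teacher_logits / temperature, dim=1),
                reduction="batchmean",
            ) * temperature ** 2
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return continual_model

# ExML loop: the agent consumes a stream of trained models, never raw task data.
# for expert in stream_of_experts:
#     continual_model = ex_model_update(continual_model, expert, surrogate_loader)
```

Note that the design choice here, treating each incoming model as a teacher for a single consolidated student, is only one way to realize ExML; the key constraint the sketch illustrates is that learning is driven entirely by previously trained models rather than by the original data streams.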