Despite rapid advances in lifelong learning (LLL) research, a large body of work focuses mainly on improving performance in existing \textit{static} continual learning (CL) setups. These methods lack the ability to succeed in a rapidly changing \textit{dynamic} environment, where an AI agent must quickly learn new instances in a `single pass' from non-i.i.d.\ (and possibly temporally contiguous/coherent) data streams without suffering from catastrophic forgetting. For practical applicability, we propose a novel lifelong learning approach that is streaming (i.e., a single input sample arrives at each time step), single-pass, class-incremental, and subject to evaluation at any moment. To address this challenging setup and the associated evaluation protocols, we propose a Bayesian framework that enables fast parameter updates given a single training example and supports any-time inference. We additionally propose an implicit regularizer in the form of snapshot self-distillation, which further reduces forgetting. We also propose an efficient method for selecting a subset of samples for online memory rehearsal, together with a new replay buffer management scheme that significantly boosts overall performance. Our empirical evaluations and ablations demonstrate that the proposed method outperforms prior work by large margins.