Through the sequential construction of posteriors as data are observed online, Bayes' theorem provides a natural framework for continual learning. We develop Variational Auto-Regressive Gaussian Processes (VAR-GPs), a principled posterior-updating mechanism for solving sequential tasks in continual learning. Relying on sparse inducing-point approximations for scalable posteriors, we propose a novel auto-regressive variational distribution which reveals two fruitful connections to existing results in Bayesian inference: expectation propagation and orthogonal inducing points. Mean predictive entropy estimates show that VAR-GPs prevent catastrophic forgetting, which is empirically supported by strong performance on modern continual learning benchmarks against competitive baselines. A thorough ablation study demonstrates the efficacy of our modeling choices.
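As background, the sequential posterior construction referred to above is the recursive form of Bayes' theorem; in the sketch below the notation is illustrative (with $\theta$ denoting the latent quantities being inferred and $\mathcal{D}_t$ the data for task $t$) and is not taken from the paper itself:
$$
p(\theta \mid \mathcal{D}_{1:t}) \;\propto\; p(\mathcal{D}_t \mid \theta)\, p(\theta \mid \mathcal{D}_{1:t-1}),
$$
so the posterior obtained after task $t-1$ plays the role of the prior for task $t$, and any scalable continual-learning method must carry forward an approximation to it, here via sparse variational posteriors.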