We focus on the continual learning problem, where tasks arrive sequentially and the aim is to perform well on the newly arrived task without performance degradation on the previously seen tasks. In contrast to the continual learning literature, which focuses on the centralized setting, we investigate the distributed estimation framework. We consider the well-established distributed learning algorithm \cocoa{} and derive closed-form expressions for its iterations in the overparametrized case. We illustrate the convergence and error performance of the algorithm under over- and under-parametrization of the problem. Our results show that, depending on the problem dimensions and data generation assumptions, \cocoa{} can perform continual learning over a sequence of tasks, i.e., it can learn a new task without forgetting previously learned tasks, with access to only one task at a time.