We focus on the continual learning problem, where tasks arrive sequentially and the aim is to perform well on a newly arrived task without degrading performance on previously seen tasks. In contrast to the continual learning literature, which focuses on the centralized setting, we investigate the distributed estimation framework. We consider the well-established distributed learning algorithm CoCoA and derive closed-form expressions for its iterations in the overparametrized case. We illustrate how the convergence and the error performance of the algorithm depend on the over-/under-parametrization of the problem. Our results show that, depending on the problem dimensions and the data generation assumptions, CoCoA can perform continual learning over a sequence of tasks, i.e., it can learn a new task without forgetting previously learned tasks, with access to only one task at a time.
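To make the setting concrete, below is a minimal sketch of a CoCoA-style iteration for distributed least squares with a warm start across sequential tasks. It is not the paper's exact formulation: the function name `cocoa_least_squares`, the column partitioning across workers, the "adding" aggregation with subproblem scaling sigma' = number of nodes, the exact local solves via pseudo-inverse, and the warm-starting of each new task from the previous estimate are all illustrative assumptions.

```python
import numpy as np

def cocoa_least_squares(A, y, num_nodes=4, num_iters=200, x0=None):
    """CoCoA-style distributed solver sketch for min_x 0.5*||y - A x||^2.

    Columns of A are partitioned across `num_nodes` workers; each worker
    solves a local quadratic subproblem on the shared residual, and the
    local updates are summed ("adding" aggregation, sigma' = num_nodes).
    Passing x0 = estimate from the previous task gives the warm-started
    sequential-task setting described above (an assumption of this sketch).
    """
    n, p = A.shape
    sigma_prime = num_nodes              # safe subproblem scaling for summed updates
    col_blocks = np.array_split(np.arange(p), num_nodes)
    x = np.zeros(p) if x0 is None else x0.astype(float).copy()
    v = A @ x                            # shared prediction vector A x

    for _ in range(num_iters):
        r = y - v                        # global residual, broadcast to all workers
        updates = []
        for idx in col_blocks:
            Ak = A[:, idx]
            # local subproblem: min_d  -r^T Ak d + (sigma'/2) ||Ak d||^2,
            # solved via the minimum-norm solution; the pseudo-inverse also
            # covers the overparametrized case (more columns than rows)
            dk = np.linalg.pinv(Ak) @ r / sigma_prime
            updates.append((idx, dk))
        for idx, dk in updates:          # aggregate: x += d, v += A d
            x[idx] += dk
            v += A[:, idx] @ dk
    return x


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, p = 20, 100                       # overparametrized: p > n
    x_true = rng.standard_normal(p)

    # Task 1, then task 2 warm-started from the task-1 estimate
    A1, A2 = rng.standard_normal((n, p)), rng.standard_normal((n, p))
    x_hat = cocoa_least_squares(A1, A1 @ x_true, x0=None)
    x_hat = cocoa_least_squares(A2, A2 @ x_true, x0=x_hat)

    print("task-1 residual:", np.linalg.norm(A1 @ x_hat - A1 @ x_true))
    print("task-2 residual:", np.linalg.norm(A2 @ x_hat - A2 @ x_true))
```

The final print statements illustrate the quantity of interest: whether fitting task 2 degrades the fit on task 1, which is what the abstract refers to as forgetting.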