The standard class-incremental continual learning setting assumes a set of tasks seen one after the other in a fixed and predefined order. This assumption is unrealistic in federated learning environments, where each client works independently and asynchronously, receiving data for the different tasks in time frames and orderings that are totally uncorrelated with those of the other clients. We introduce a novel setting, Asynchronous Federated Continual Learning (AFCL), where the continual learning of multiple tasks happens at each client with different orderings and in asynchronous time slots. We address this novel problem using prototype-based learning, a representation loss, fractal pre-training, and a modified aggregation policy. Our approach, called FedSpace, effectively tackles this task, as shown by the results on the CIFAR-100 dataset using 3 different federated splits with 50, 100, and 500 clients, respectively. The code and federated splits are available at https://github.com/LTTM/FedSpace.
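To make the AFCL setting concrete, the following is a minimal sketch of how such an asynchronous federated split could be generated; the function name, the number of time slots, and the split format are illustrative assumptions, not the actual format used in the FedSpace repository. It partitions CIFAR-100 into 10 tasks of 10 classes each and assigns every client its own random task ordering over asynchronous time slots.

```python
import random

# Illustrative AFCL split (assumed structure, not the FedSpace repo format):
# CIFAR-100 is divided into 10 tasks of 10 classes each, and every client
# receives its own random task ordering over asynchronous time slots.
NUM_CLASSES = 100
NUM_TASKS = 10
CLASSES_PER_TASK = NUM_CLASSES // NUM_TASKS

def make_afcl_split(num_clients, num_slots=20, seed=0):
    rng = random.Random(seed)
    tasks = [list(range(t * CLASSES_PER_TASK, (t + 1) * CLASSES_PER_TASK))
             for t in range(NUM_TASKS)]
    schedule = {}
    for client in range(num_clients):
        # Client-specific task ordering, uncorrelated with other clients.
        order = rng.sample(range(NUM_TASKS), NUM_TASKS)
        # Each task becomes active in a randomly chosen slot (with possible
        # idle gaps), so clients progress asynchronously with respect to
        # one another.
        slots = sorted(rng.sample(range(num_slots), NUM_TASKS))
        schedule[client] = {slot: tasks[t] for slot, t in zip(slots, order)}
    return schedule

if __name__ == "__main__":
    split = make_afcl_split(num_clients=50)
    print(split[0])  # time slot -> list of active class ids for client 0
```

Under this kind of split, two clients may train on the same task at widely different moments, or never overlap at all, which is what distinguishes AFCL from standard synchronous federated continual learning.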