Federated Learning (FL) has been introduced as a new machine learning paradigm that leverages local devices. At the server level, FL regularly aggregates models learned locally on distributed clients to obtain a more general model. Current solutions rely on the availability of large amounts of stored data on the client side in order to fine-tune the models sent by the server. Such a setting is not realistic in mobile pervasive computing, where data storage must be kept low and data characteristics can change dramatically. To account for this variability, one solution is to use the data regularly collected by the client to progressively adapt the received model. However, such a naive approach exposes clients to the well-known problem of catastrophic forgetting. To address this problem, we define a Federated Continual Learning approach that is mainly based on distillation. Our approach allows a better use of resources: it eliminates the need to retrain from scratch when new data arrives and reduces memory usage by limiting the amount of data to be stored. This proposal has been evaluated in the Human Activity Recognition (HAR) domain and has been shown to effectively reduce the effect of catastrophic forgetting.
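As a rough illustration of the kind of client-side update this describes, the sketch below combines a supervised loss on newly collected data with a distillation loss against the model just received from the server, so that local adaptation does not erase previously learned knowledge. This is only a minimal example of distillation-based continual adaptation, not the authors' exact method; the PyTorch setup, the `client_update` helper, and the temperature `T` and weight `alpha` values are illustrative assumptions.

```python
# Minimal sketch of distillation-based client adaptation (illustrative, not the paper's code).
# The received server model acts as a frozen teacher; the student is fine-tuned on the
# client's newly collected data while being pulled toward the teacher's predictions.
import copy
import torch
import torch.nn.functional as F

def client_update(server_model, new_data_loader, epochs=1, lr=1e-3, T=2.0, alpha=0.5):
    teacher = copy.deepcopy(server_model).eval()    # frozen copy of the model sent by the server
    student = copy.deepcopy(server_model).train()   # copy adapted on locally collected data
    optimizer = torch.optim.SGD(student.parameters(), lr=lr)

    for _ in range(epochs):
        for x, y in new_data_loader:                # data gathered since the last federation round
            with torch.no_grad():
                teacher_logits = teacher(x)         # soft targets encoding previous knowledge
            student_logits = student(x)

            ce = F.cross_entropy(student_logits, y) # fit the newly observed samples
            kd = F.kl_div(                          # stay close to the teacher to limit forgetting
                F.log_softmax(student_logits / T, dim=1),
                F.softmax(teacher_logits / T, dim=1),
                reduction="batchmean",
            ) * (T * T)

            loss = alpha * ce + (1 - alpha) * kd
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

    return student.state_dict()                     # returned to the server for aggregation
```

In this kind of scheme, only the current batch of freshly collected data needs to be kept on the device, since the distillation term substitutes for replaying older samples.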