Federated Learning (FL) has been introduced as a machine learning paradigm that leverages local devices. At the server level, FL regularly aggregates models learned locally on distributed clients to obtain a more general model. In this way, no private data is sent over the network, and the communication cost is reduced. However, current solutions rely on the availability of large amounts of stored data on the client side in order to fine-tune the models sent by the server. Such a setting is not realistic in mobile pervasive computing, where data storage must be kept low and data characteristics (distributions) can change dramatically. To account for this variability, one solution is to use the data regularly collected by the client to progressively adapt the received model. However, such a naive approach exposes clients to the well-known problem of catastrophic forgetting. The purpose of this paper is to demonstrate this problem in the context of mobile human activity recognition on smartphones.
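To make the server-side aggregation step concrete, below is a minimal sketch of a FedAvg-style weighted average of client model parameters, in the spirit of the aggregation the abstract alludes to. The function name `fedavg` and the data layout (one list of per-layer arrays per client) are illustrative assumptions, not the paper's implementation.

```python
# Minimal FedAvg-style aggregation sketch (hypothetical names and layout).
import numpy as np

def fedavg(client_weights, client_sizes):
    """Weighted average of client model parameters.

    client_weights: list of per-client parameter lists
                    (one np.ndarray per model layer).
    client_sizes:   number of local samples per client,
                    used to weight the average.
    """
    total = sum(client_sizes)
    num_layers = len(client_weights[0])
    aggregated = []
    for layer in range(num_layers):
        # Sum each client's layer, weighted by its share of the total data.
        layer_sum = sum(
            (n / total) * w[layer]
            for w, n in zip(client_weights, client_sizes)
        )
        aggregated.append(layer_sum)
    return aggregated

# Toy usage: two clients, a single-layer "model".
clients = [[np.array([1.0, 2.0])], [np.array([3.0, 4.0])]]
sizes = [100, 300]
print(fedavg(clients, sizes))  # [array([2.5, 3.5])]
```

The weighting by local sample count means clients with more data pull the global model further toward their local optimum, which is one reason shifting client-side data distributions matter.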