We consider a federated learning framework in which a parameter server (PS) trains a global model using $n$ clients without storing the client data centrally at a cloud server. Focusing on a setting where the client datasets are fast-changing and highly temporal in nature, we investigate the timeliness of model updates and propose a novel timely communication scheme. Under the proposed scheme, at each iteration the PS waits for $m$ available clients and sends them the current model. Then, the PS uses the local updates of the earliest $k$ out of these $m$ clients to update the global model. We derive the average age of information experienced by each client and numerically characterize the age-optimal $m$ and $k$ values for a given $n$. Our results indicate that, in addition to ensuring timeliness, the proposed communication scheme yields significantly smaller average iteration times than random client selection without hurting the convergence of the global learning task.
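To make the per-iteration timing of the scheme concrete, the following is a minimal simulation sketch of one round: the PS waits for the first $m$ of $n$ clients to become available, sends them the model, and then uses the earliest $k$ of their local updates. The function names, the exponential timing model, and the choice of parameters are illustrative assumptions for this sketch, not the paper's exact system model.

```python
import random

def iteration_time(n, m, k, rng):
    """Simulated duration of one global iteration under the timely scheme.

    Assumed (hypothetical) model: client availability times and local
    update times are i.i.d. exponential with unit rate.
    """
    # PS waits until m of the n clients become available.
    avail = sorted(rng.expovariate(1.0) for _ in range(n))
    wait_for_m = avail[m - 1]
    # The m selected clients compute local updates; the PS uses only
    # the earliest k of them to update the global model.
    compute = sorted(rng.expovariate(1.0) for _ in range(m))
    wait_for_k = compute[k - 1]
    return wait_for_m + wait_for_k

# Average iteration time over many simulated rounds for a given (n, m, k).
rng = random.Random(0)
rounds = [iteration_time(n=100, m=20, k=10, rng=rng) for _ in range(10_000)]
print(sum(rounds) / len(rounds))
```

Under these assumptions, sweeping $m$ and $k$ in such a simulation is one way to visualize how waiting for only the earliest $k$ updates shortens the average iteration time relative to waiting for all selected clients.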