Personalization in Federated Learning (FL) aims to adapt a collaboratively trained global model to each client. Current approaches to personalization in FL operate at a coarse granularity: all input instances of a client use the same personalized model. This ignores the fact that some instances are handled more accurately by the global model because it generalizes better. To address this challenge, this work proposes Flow, a fine-grained, stateless approach to personalized FL. Flow creates dynamic personalized models by learning a routing mechanism that determines whether an input instance prefers the local parameters or their global counterparts. Flow thus introduces per-instance routing on top of per-client personalization to improve accuracy at each client. Further, Flow is stateless, so a client need not retain its personalized state across FL rounds; this makes Flow practical for large-scale FL settings and friendly to newly joined clients. Evaluations on the Stackoverflow, Reddit, and EMNIST datasets demonstrate that Flow outperforms state-of-the-art non-personalized and per-client personalized FL approaches in prediction accuracy.
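To make the per-instance routing idea concrete, the sketch below shows one way such a layer could be wired up in PyTorch: each layer keeps a global and a local copy of its weights, and a small learned router emits a per-instance gate that mixes the two. This is only an illustrative sketch under assumed design choices; the class `RoutedLinear`, the sigmoid gating rule, and the router architecture are hypothetical and are not specified by the abstract.

```python
import torch
import torch.nn as nn

class RoutedLinear(nn.Module):
    """Hypothetical per-instance routed layer (not the actual Flow design).

    Keeps a global copy of the weights (updated via FL aggregation) and a
    local, personalized copy. A learned router decides, per input instance,
    how much to rely on the local vs. the global parameters.
    """

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.global_fc = nn.Linear(in_dim, out_dim)  # shared across clients
        self.local_fc = nn.Linear(in_dim, out_dim)   # personalized on the client
        self.router = nn.Linear(in_dim, 1)           # per-instance routing score

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # gate in (0, 1): close to 1 -> prefer local parameters,
        # close to 0 -> prefer the global counterpart
        gate = torch.sigmoid(self.router(x))         # shape: (batch, 1)
        return gate * self.local_fc(x) + (1.0 - gate) * self.global_fc(x)

if __name__ == "__main__":
    layer = RoutedLinear(16, 8)
    batch = torch.randn(4, 16)   # four input instances from one client
    out = layer(batch)           # each instance gets its own local/global mix
    print(out.shape)             # torch.Size([4, 8])
```

Because the gate is computed from the input itself, instances that the global model already handles well can route toward the global weights, while client-specific instances can route toward the local ones.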