The traditional approach in FL learns a single global model collaboratively across many clients under the orchestration of a central server. However, a single global model may not serve all participating clients well under data heterogeneity. Personalizing the global model therefore becomes crucial for handling the challenges posed by statistical heterogeneity and non-IID data distributions. Unlike prior works, we propose a new approach that obtains a personalized model from a client-level objective. This further motivates all clients to participate in the federation even under statistical heterogeneity in order to improve their own performance, rather than merely serving as a source of data and model training for the central server. To realize this personalization, we find a small subnetwork for each client by applying either hybrid pruning (a combination of structured and unstructured pruning) or unstructured pruning alone. Through a range of experiments on different benchmarks, we observe that clients with similar data (labels) share similar personal parameters. By finding a subnetwork for each client ...
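As a rough illustration of how a per-client subnetwork could be carved out by hybrid pruning, the sketch below combines structured (whole-channel) and unstructured (per-weight magnitude) masking on a NumPy weight matrix. This is an assumption-laden sketch, not the paper's actual algorithm; the sparsity levels, L2 channel criterion, and magnitude threshold are all illustrative choices.

```python
import numpy as np

def unstructured_prune_mask(weights, sparsity):
    """Keep only the largest-magnitude fraction (1 - sparsity) of weights."""
    flat = np.abs(weights).ravel()
    k = int(np.ceil(sparsity * flat.size))  # number of weights to prune
    if k == 0:
        return np.ones_like(weights, dtype=bool)
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    return np.abs(weights) > threshold

def structured_prune_mask(weights, channel_sparsity):
    """Drop whole output channels (rows) with the smallest L2 norm."""
    norms = np.linalg.norm(weights, axis=1)
    k = int(np.ceil(channel_sparsity * norms.size))  # channels to drop
    mask = np.ones_like(weights, dtype=bool)
    if k > 0:
        drop = np.argsort(norms)[:k]
        mask[drop, :] = False
    return mask

def hybrid_subnetwork(weights, channel_sparsity=0.25, weight_sparsity=0.5):
    """Hybrid pruning: remove weak channels first, then apply
    unstructured magnitude pruning to the surviving weights."""
    coarse = structured_prune_mask(weights, channel_sparsity)
    fine = unstructured_prune_mask(np.where(coarse, weights, 0.0),
                                   weight_sparsity)
    return coarse & fine  # boolean subnetwork mask for this client
```

In a personalized FL setting, each client would derive such a mask from its local data or local weights, so clients with similar label distributions would tend toward overlapping masks, consistent with the observation in the abstract.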