Traditionally, federated learning (FL) aims to train a single global model collaboratively across multiple clients and a server. Two natural challenges that FL algorithms face are heterogeneity in the data across clients and collaboration among clients with {\em diverse resources}. In this work, we introduce QuPeD, a \textit{quantized} and \textit{personalized} FL algorithm that facilitates collective (personalized compressed model) training via \textit{knowledge distillation} (KD) among clients with access to heterogeneous data and resources. For personalization, we allow clients to learn \textit{compressed personalized models} with different quantization parameters and model dimensions/structures. Towards this, we first propose an algorithm for learning quantized models through a relaxed optimization problem in which the quantization values are also optimized. Since each client participating in the (federated) learning process may have different requirements for its compressed model (in both model dimension and precision), we formulate a compressed personalization framework by introducing a knowledge distillation loss into the local client objectives, with clients collaborating through a global model. We develop an alternating proximal gradient update for solving this compressed personalization problem and analyze its convergence properties. Numerically, we validate that QuPeD outperforms competing personalized FL methods, FedAvg, and local training of clients in various heterogeneous settings.
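As a minimal illustrative sketch (not the exact formulation of the paper), the local objective of client $i$ in such a compressed personalization framework can be thought of as a local loss augmented with a soft quantization penalty and a distillation term coupling the personalized model to the global model. Here $x_i$ and $c_i$ denote client $i$'s personalized model parameters and quantization values, $x$ the global model, $f_i$ the local empirical loss, $R$ a distance-to-quantization-values regularizer handled by the proximal step, and $\sigma(\cdot/\tau)$ softmax outputs softened by a temperature $\tau$; the weights $\lambda_p, \lambda_{\mathrm{KD}} \ge 0$ and the exact form of $R$ are assumptions for illustration:
\[
\min_{x_i,\, c_i}\; f_i(x_i) \;+\; \lambda_p\, R(x_i, c_i) \;+\; \lambda_{\mathrm{KD}}\, \mathrm{KL}\!\left(\sigma\!\big(h(x)/\tau\big)\,\Big\|\,\sigma\!\big(h_i(x_i)/\tau\big)\right),
\]
where $h_i(x_i)$ and $h(x)$ denote the logits of the personalized and global models on local data. An alternating proximal gradient method then updates $x_i$ (gradient step on the smooth terms, proximal step on $R$), $c_i$, and the global model $x$ in turn.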