Traditionally, federated learning (FL) aims to train a single global model collaboratively across multiple clients and a server. Two natural challenges for FL algorithms are data heterogeneity across clients and collaboration among clients with {\em diverse resources}. In this work, we introduce QuPeL, a \textit{quantized} and \textit{personalized} FL algorithm that facilitates collective training with heterogeneous clients while respecting resource diversity. For personalization, we allow clients to learn \textit{compressed personalized models} whose quantization parameters depend on their resources. Towards this, we first propose an algorithm for learning quantized models through a relaxed optimization problem in which the quantization values themselves are also optimized. Since each client participating in the (federated) learning process may require a different quantized model (in both values and precision), we formulate a quantized personalization framework that adds a penalty term pulling each local client objective toward a globally trained model, thereby encouraging collaboration. We develop an alternating proximal gradient update for solving this quantized personalization problem and analyze its convergence properties. Numerically, we show that optimizing over the quantization levels improves performance, and we validate that QuPeL outperforms both FedAvg and local training of clients in a heterogeneous setting.
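To make the setup concrete, here is a minimal sketch of the kind of objective described above; the notation ($f_i$, $x_i$, $c_i$, $R$, $\mu$, $\lambda$) is illustrative and not taken verbatim from the paper. With local loss $f_i$ and personalized model $x_i$ at client $i$, learnable quantization levels $c_i$, a penalty $R(x_i, c_i)$ that relaxes the hard constraint that the entries of $x_i$ lie on the grid defined by $c_i$, and a global model $w$, one plausible instantiation of the quantized personalization problem over $n$ clients is
\begin{equation*}
\min_{w,\,\{x_i\},\,\{c_i\}} \; \sum_{i=1}^{n} \Big( f_i(x_i) \;+\; \mu\, R(x_i, c_i) \;+\; \frac{\lambda}{2}\, \lVert x_i - w \rVert^2 \Big),
\end{equation*}
where $\mu > 0$ weights the quantization relaxation and $\lambda > 0$ controls how strongly each personalized model is pulled toward the global model. An alternating proximal gradient method for such an objective would cycle through updates of $x_i$, $c_i$, and $w$, taking a gradient or proximal step in one block while holding the others fixed.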