Federated learning (FL) supports privacy-preserving, decentralized machine learning (ML) model training by keeping data on client devices. However, non-independent and identically distributed (non-IID) data across clients biases updates and degrades performance. To alleviate these issues, we propose Clust-PSI-PFL, a clustering-based personalized FL framework that uses the Population Stability Index (PSI) to quantify the degree of non-IID data. We compute a weighted PSI metric, $WPSI^L$, which we show to be more informative than common non-IID metrics (Hellinger, Jensen-Shannon, and Earth Mover's distance). Using PSI features, we form distributionally homogeneous groups of clients via K-means++; the optimal number of clusters is chosen by a systematic silhouette-based procedure, typically yielding few clusters with modest overhead. Across six datasets (tabular, image, and text modalities), two partition protocols (Dirichlet with parameter $\alpha$ and Similarity with parameter $S$), and multiple client sizes, Clust-PSI-PFL delivers up to 18% higher global accuracy than state-of-the-art baselines and markedly improves client fairness, with a relative improvement of 37% under severe non-IID data. These results establish PSI-guided clustering as a principled, lightweight mechanism for robust PFL under label skew.
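The PSI-guided clustering pipeline described above can be sketched as follows. This is an illustrative sketch, not the paper's implementation: the per-label PSI feature construction, the $\epsilon$-smoothing of empty label bins, and the function names are assumptions, and the simple silhouette search stands in for the paper's systematic selection procedure.

```python
# Sketch of PSI-guided client clustering for personalized FL.
# Assumptions (not from the paper): per-label PSI contributions are used as
# client features, epsilon-smoothing avoids log(0), and the cluster count is
# picked by maximizing the silhouette score over a small range of k.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score


def psi_terms(p, q, eps=1e-6):
    """Per-label PSI contributions between distributions p and q (their sum is the PSI)."""
    p = np.clip(np.asarray(p, dtype=float), eps, None)
    q = np.clip(np.asarray(q, dtype=float), eps, None)
    p, q = p / p.sum(), q / q.sum()
    return (p - q) * np.log(p / q)


def psi(p, q, eps=1e-6):
    """Population Stability Index: sum of the per-label contributions."""
    return float(psi_terms(p, q, eps).sum())


def cluster_clients(label_counts, k_max=5, seed=0):
    """Cluster clients by the PSI profile of their label distribution vs. the global one."""
    counts = np.asarray(label_counts, dtype=float)
    global_dist = counts.sum(axis=0) / counts.sum()
    # One feature vector per client: per-label PSI terms against the global distribution.
    feats = np.array([psi_terms(c / c.sum(), global_dist) for c in counts])
    best_k, best_score, best_labels = None, -np.inf, None
    for k in range(2, min(k_max, len(counts) - 1) + 1):
        labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(feats)
        score = silhouette_score(feats, labels)
        if score > best_score:
            best_k, best_score, best_labels = k, score, labels
    return best_k, best_labels
```

In a full PFL round, each cluster would then train its own personalized model among its members; the sketch covers only the grouping step.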