Federated learning (FL) enables organizations to collaboratively train models without sharing their datasets. Despite this advantage, recent studies show that both client updates and the global model can leak private information, limiting adoption in sensitive domains such as healthcare. Local differential privacy (LDP) offers strong protection by letting each participant privatize its updates before transmission. However, existing LDP methods were designed for centralized training and introduce challenges in FL, including high resource demands that can cause client dropouts and a lack of reliable privacy guarantees under asynchronous participation. These issues undermine model generalizability, fairness, and compliance with regulations such as HIPAA and GDPR. To address them, we propose L-RDP, a DP method tailored to the LDP setting that maintains constant, lower memory usage to reduce client dropouts and provides rigorous per-client privacy guarantees by accounting for intermittent participation.
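To make the LDP step described above concrete, the sketch below shows a generic client-side privatization routine: each client clips its model update and adds Gaussian noise locally, so only the privatized update is sent to the server. This is a minimal illustration of a clipped-Gaussian LDP mechanism under assumed parameters (`clip_norm`, `sigma`); it is not the L-RDP algorithm proposed here.

```python
import numpy as np

def privatize_update(update, clip_norm=1.0, sigma=0.8, rng=None):
    """Clip a client's model update and add Gaussian noise on-device,
    so only the privatized vector ever leaves the client.

    Illustrative clipped-Gaussian LDP mechanism; not the paper's L-RDP method.
    """
    rng = rng if rng is not None else np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))  # bound sensitivity
    noise = rng.normal(0.0, sigma * clip_norm, size=update.shape)
    return clipped + noise

# Example round: every client privatizes locally, then the server averages.
rng = np.random.default_rng(0)
client_updates = [rng.normal(size=10) for _ in range(5)]      # raw local updates
private_updates = [privatize_update(u, rng=rng) for u in client_updates]
global_update = np.mean(private_updates, axis=0)              # server-side aggregation
```

In this setup the server never observes raw updates, which is the property the abstract refers to when it says participants privatize updates before transmission; how the noise scale and accounting handle intermittent participation is the paper's contribution and is not reflected in this sketch.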