Federated learning enables a heterogeneous set of devices to collaboratively train a shared model while keeping their training data stored locally, removing the need for centralized cloud storage. Despite existing privacy measures, private data can still be exposed by reverse-engineering the exchanged gradients, even when noise is added. To address this, recent work has advocated training on encrypted model parameters. This paper introduces a novel federated learning algorithm that instead relies on coded local gradients without encryption: clients exchange coded proxies of their model parameters and inject surplus noise for additional privacy. Two variants of the algorithm are presented, and their convergence and learning rates are characterized in terms of the coding scheme and the characteristics of the raw data. Two encryption-free implementations, based on fixed and random coding matrices respectively, are provided, and simulation results are promising from both federated optimization and machine learning perspectives.
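As a rough illustration of the kind of coded, noise-perturbed exchange the abstract describes, the following minimal Python sketch shows a client encoding its local gradient with a shared coding matrix and adding surplus noise before transmission. This is not the paper's exact algorithm; the coding matrix `C`, noise scale `noise_std`, and the simple averaging at the server are illustrative assumptions.

```python
import numpy as np

def encode_update(local_grad, coding_matrix, noise_std):
    """Return a coded proxy of a local gradient with surplus noise added.

    local_grad    : (d,) raw gradient computed on the client's private data
    coding_matrix : (k, d) coding matrix (fixed, or randomly drawn per round)
    noise_std     : standard deviation of the surplus noise injected for privacy
    """
    coded = coding_matrix @ local_grad  # coded proxy, never the raw gradient
    noise = np.random.normal(0.0, noise_std, size=coded.shape)
    return coded + noise                # only this perturbed proxy leaves the device

# Illustrative usage: each client sends a coded proxy; the server aggregates them.
rng = np.random.default_rng(0)
d, k, num_clients = 10, 6, 5
C = rng.standard_normal((k, d))         # shared (random) coding matrix for this round
client_grads = [rng.standard_normal(d) for _ in range(num_clients)]
aggregate = sum(encode_update(g, C, noise_std=0.1) for g in client_grads) / num_clients
```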