Federated Learning (FL) represents a paradigm shift toward distributed model training across isolated data repositories or edge devices without explicit data sharing. Despite its advantages, FL is inherently less efficient than centralized training, leading to increased energy consumption and, consequently, higher carbon emissions. In this paper, we propose CAMA, a carbon-aware FL framework that promotes operation on renewable excess energy and spare computing capacity with the aim of minimizing operational carbon emissions. CAMA introduces a dynamic model adaptation strategy that adjusts model sizes based on the availability of energy and computing resources. Ordered dropout is integrated to enable aggregation across varying model sizes. Empirical evaluations on real-world energy and load traces demonstrate that our method achieves faster convergence and ensures equitable client participation, while scaling efficiently to large numbers of clients. The source code of CAMA is available at https://github.com/denoslab/CAMA.
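To illustrate the mechanism referenced above, the sketch below shows how ordered dropout can extract a nested submodel by keeping only the leading fraction of units in each hidden layer, so that clients with less spare capacity train narrower models whose weights still align with the full model for aggregation. The function name `extract_submodel`, the width fraction `p`, and the toy two-layer network are illustrative assumptions, not CAMA's actual API.

```python
# Minimal sketch (not CAMA's implementation): ordered dropout keeps the first
# p-fraction of units in every hidden layer, so submodels of different widths
# are nested and their overlapping weights can be averaged on the server.
import torch
import torch.nn as nn


def extract_submodel(full_model: nn.Sequential, p: float) -> nn.Sequential:
    """Build a narrower copy of a fully connected model by slicing the
    leading rows/columns of each weight matrix (illustrative only)."""
    layers, in_keep = [], None
    linears = [m for m in full_model if isinstance(m, nn.Linear)]
    for i, lin in enumerate(linears):
        last = i == len(linears) - 1
        # Keep all output units of the final layer; shrink hidden layers.
        out_keep = lin.out_features if last else max(1, int(p * lin.out_features))
        new_lin = nn.Linear(in_keep or lin.in_features, out_keep)
        with torch.no_grad():
            new_lin.weight.copy_(lin.weight[:out_keep, : new_lin.in_features])
            new_lin.bias.copy_(lin.bias[:out_keep])
        layers += [new_lin] if last else [new_lin, nn.ReLU()]
        in_keep = out_keep
    return nn.Sequential(*layers)


full = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
small = extract_submodel(full, p=0.5)  # client with little spare capacity
print(small)  # hidden width shrinks from 256 to 128; output stays at 10
```

Because every submodel shares the leading units of the full model, the server can average each parameter over exactly the clients whose width covered it, which is what makes aggregation with varying model sizes possible.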