In many federated learning schemes, a random subset of clients sends its model updates to the server for aggregation in each round. Although this client selection strategy aims to reduce communication overhead, it remains inefficient in energy and computation, especially when the clients are resource-constrained devices. This is because conventional random client selection overlooks the content of the exchanged information and provides no mechanism to reduce the transmission of semantically redundant data. To overcome this challenge, we propose clustering the clients with the aid of similarity metrics and selecting a single client from each of the formed clusters to participate in each round of federated training. To evaluate our approach, we perform an extensive feasibility study of nine statistical metrics in the clustering process. Simulation results reveal that, in a scenario with high data heterogeneity across clients, similarity-based clustering can reduce the number of required rounds compared to baseline random client selection. In addition, for similarity metrics that select an equivalent number of clients per round as the baseline random scheme, energy consumption is notably reduced, by 23.93% to 41.61%.
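To make the selection scheme concrete, the following is a minimal sketch of the idea described above: clients are grouped by the similarity of their model updates, and one representative per cluster participates in the round. The greedy cosine-similarity clustering, the `threshold` parameter, and the function names are illustrative assumptions, not the paper's actual metrics or algorithm.

```python
import numpy as np

def cluster_clients(updates, threshold=0.9):
    """Greedily group clients whose flattened model updates have cosine
    similarity >= threshold with a cluster leader.
    (Illustrative stand-in for the paper's nine statistical metrics.)"""
    normed = [u / (np.linalg.norm(u) + 1e-12) for u in updates]
    leaders, clusters = [], []
    for i in range(len(updates)):
        for k, lead in enumerate(leaders):
            if float(normed[i] @ normed[lead]) >= threshold:
                clusters[k].append(i)  # similar enough: join existing cluster
                break
        else:
            leaders.append(i)          # no similar leader: start a new cluster
            clusters.append([i])
    return clusters

def select_representatives(clusters, rng):
    # Only one client per cluster transmits its update this round,
    # avoiding semantically redundant transmissions.
    return [int(rng.choice(c)) for c in clusters]

rng = np.random.default_rng(0)
# Toy updates: clients 0 and 1 are nearly identical, client 2 differs.
updates = [np.array([1.0, 0.0]), np.array([0.99, 0.05]), np.array([0.0, 1.0])]
clusters = cluster_clients(updates, threshold=0.9)
picked = select_representatives(clusters, rng)
# clusters -> [[0, 1], [2]]; only two of the three clients transmit.
```

Compared to sampling clients uniformly at random, this keeps one transmitter per group of similar clients, which is the mechanism behind the reported round and energy savings.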