Federated Learning (FL) enables distributed training by learners using local data, thereby enhancing privacy and reducing communication. However, as deployments scale, it presents numerous challenges relating to heterogeneity in data distribution, device capabilities, and participant availability, which can impact both model convergence and bias. Existing FL schemes use random participant selection to improve fairness; however, this can result in inefficient use of resources and lower-quality training. In this work, we systematically address the question of resource efficiency in FL, showing the benefits of intelligent participant selection and the incorporation of updates from straggling participants. We demonstrate how these factors enable resource efficiency while also improving trained model quality.