Federated learning (FL) is an emerging branch of AI that enables edge devices to collaboratively train a global machine learning model without centralizing data, offering privacy by default. Despite remarkable advances, however, this paradigm comes with various challenges. In particular, client heterogeneity is the norm in large-scale deployments and degrades training quality along dimensions such as accuracy, fairness, and time-to-accuracy. Moreover, energy consumption across these battery-constrained devices remains largely unexplored and limits the wide adoption of FL. To address this issue, we develop EAFL, an energy-aware FL client-selection method that accounts for energy consumption to maximize the participation of heterogeneous target devices. \scheme is a power-aware training algorithm that preferentially selects clients with higher battery levels while maximizing overall system efficiency. Our design jointly minimizes time-to-accuracy and maximizes the remaining on-device battery levels. \scheme improves testing model accuracy by up to 85\% and reduces client drop-out by up to 2.45$\times$.
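As a rough illustration of such a joint objective (the abstract does not specify the actual selection criterion, so the formulation below is a hypothetical sketch; the symbols $u(c)$, $B(c)$, $B_{\max}$, $\alpha$, and $K$ are illustrative, not taken from the paper), an energy-aware selector could score each candidate client $c$ as
\[
  \mathrm{Util}(c) \;=\; u(c)\cdot\Bigl(\frac{B(c)}{B_{\max}}\Bigr)^{\alpha},
\]
where $u(c)$ is a conventional time-to-accuracy utility, $B(c)$ is the client's remaining battery level, and $\alpha \ge 0$ tunes how strongly low-battery clients are penalized. Each round, the coordinator would then select the top-$K$ clients by $\mathrm{Util}(c)$, so that devices with more remaining charge participate more often and drop-outs caused by battery exhaustion become rarer.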