Federated learning (FL) is an emerging distributed machine learning method that enables in-situ model training on decentralized edge devices. However, multiple simultaneous training activities can overload resource-constrained devices. In this work, we propose a smart multi-tenant FL system, MuFL, to effectively coordinate and execute simultaneous training activities. We first formalize the problem of multi-tenant FL, define multi-tenant FL scenarios, and introduce a vanilla multi-tenant FL system that trains activities sequentially to form baselines. We then propose two approaches to optimize multi-tenant FL: 1) activity consolidation merges the training activities into a single activity with a multi-task architecture; 2) after training this consolidated activity for a number of rounds, activity splitting divides it into groups based on the affinities among activities, so that activities within a group have better synergy. Extensive experiments demonstrate that MuFL outperforms other methods while consuming 40% less energy. We hope this work will inspire the community to further study and optimize multi-tenant FL.