Federated learning (FL) has recently gained considerable attention due to its ability to learn on decentralised data while preserving client privacy. However, it also poses additional challenges related to the heterogeneity of the participating devices, both in terms of their computational capabilities and contributed data. Meanwhile, Neural Architecture Search (NAS) has been successfully used with centralised datasets, producing state-of-the-art results in constrained or unconstrained settings. However, such centralised datasets may not always be available for training. Most recent work at the intersection of NAS and FL attempts to alleviate this issue in a cross-silo federated setting, which assumes homogeneous compute environments with datacenter-grade hardware. In this paper we explore the question of whether we can design architectures of different footprints in a cross-device federated setting, where the device landscape, availability and scale are very different. To this end, we design our system, FedorAS, to discover and train promising architectures in a resource-aware manner when dealing with devices of varying capabilities holding non-IID distributed data. We present empirical evidence of its effectiveness across different settings, spanning three modalities (vision, speech, text), and showcase its superior performance compared to state-of-the-art federated solutions, while maintaining resource efficiency.