Federated learning (FL) has recently gained considerable attention due to its ability to use decentralised data while preserving privacy. However, it also poses additional challenges related to the heterogeneity of the participating devices, both in terms of their computational capabilities and contributed data. Meanwhile, Neural Architecture Search (NAS) has been successfully used with centralised datasets, producing state-of-the-art results in constrained (hardware-aware) and unconstrained settings. However, even the most recent work lying at the intersection of NAS and FL assumes a homogeneous compute environment with datacenter-grade hardware and does not address the issues of working with constrained, heterogeneous devices. As a result, the practical use of NAS in a federated setting remains an open problem that we address in our work. We design our system, FedorAS, to discover and train promising architectures when dealing with devices of varying capabilities holding non-IID distributed data, and present empirical evidence of its effectiveness across different settings. Specifically, we evaluate FedorAS on datasets spanning three different modalities (vision, speech, text) and show that it outperforms state-of-the-art federated solutions while maintaining resource efficiency.