Federated Learning (FL) has proven to be an effective learning framework when data cannot be centralized due to privacy concerns, communication costs, or regulatory restrictions. When training deep learning models in an FL setting, practitioners typically employ a predefined model architecture discovered in a centralized environment. However, this predefined architecture may not be the optimal choice, because it may not fit non-identically and independently distributed (non-IID) data. We therefore advocate automating federated learning (AutoFL) to improve model accuracy and reduce manual design effort. Specifically, we study AutoFL via Neural Architecture Search (NAS), which can automate the design process. We propose a Federated NAS (FedNAS) algorithm that helps scattered workers collaboratively search for an architecture with higher accuracy, and we build a system based on FedNAS. Our experiments on a non-IID dataset show that the architecture searched by FedNAS outperforms the manually predefined architecture.
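To make the collaborative search concrete, the following is a minimal sketch of one FedNAS-style communication round, assuming a FedAvg-style weighted average applied to both model weights and architecture parameters; the `worker.local_search` interface and the `aggregate` helper are hypothetical names introduced for illustration, not the paper's actual API.

```python
import copy
from typing import Dict, List

import torch


def aggregate(states: List[Dict[str, torch.Tensor]],
              num_samples: List[int]) -> Dict[str, torch.Tensor]:
    """Weighted (FedAvg-style) average of per-worker state dicts."""
    total = sum(num_samples)
    avg = copy.deepcopy(states[0])
    for key in avg:
        avg[key] = sum(s[key] * (n / total)
                       for s, n in zip(states, num_samples))
    return avg


def fednas_round(workers, global_w, global_alpha):
    """One communication round (sketch): each worker locally updates both
    its model weights w and architecture parameters alpha (e.g., via a
    DARTS-style bilevel step on its own non-IID shard); the server then
    averages both, weighted by each worker's local sample count."""
    w_states, alpha_states, sizes = [], [], []
    for worker in workers:
        w, alpha, n = worker.local_search(global_w, global_alpha)
        w_states.append(w)
        alpha_states.append(alpha)
        sizes.append(n)
    return aggregate(w_states, sizes), aggregate(alpha_states, sizes)
```

Averaging the architecture parameters alongside the weights is what lets heterogeneous workers converge on a shared architecture rather than each searching in isolation.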