Neural Architecture Search (NAS) is a collection of methods for automating the design of neural network architectures. We apply this idea to Federated Learning (FL), in which predefined neural network models are trained on client/device data. That approach is suboptimal because model developers cannot observe the local data and are therefore unable to build highly accurate and efficient models. NAS is promising for FL because it can automatically search for global and personalized models over non-IID data. Most NAS methods, however, are computationally expensive and require fine-tuning after the search, making them a complex two-stage process with possible human intervention. There is thus a need for end-to-end NAS that can run under the heterogeneous data and resource distribution typical of the FL scenario. In this paper, we present an effective approach to direct federated NAS that is hardware agnostic, computationally lightweight, and one-stage, searching for ready-to-deploy neural network models. Our results show an order-of-magnitude reduction in resource consumption while edging out the prior art in accuracy. This opens a window of opportunity to create optimized and computationally efficient federated learning systems.