Recently, several Neural Architecture Search (NAS) techniques have been proposed for the automatic design of Graph Convolutional Network (GCN) architectures. They bring great convenience to the use of GCNs, but are hardly applicable to Federated Learning (FL) scenarios, where datasets are distributed and private, which limits their applications. Moreover, they need to train many candidate GCN models from scratch, which is inefficient for FL. To address these challenges, we propose FL-AGCNS, an efficient GCN NAS algorithm suited to FL scenarios. FL-AGCNS designs a federated evolutionary optimization strategy that enables distributed agents to cooperatively design powerful GCN models while keeping personal information on local devices. In addition, it applies a GCN SuperNet and a weight-sharing strategy to speed up the evaluation of candidate GCN models. Experimental results show that FL-AGCNS can find better GCN models in a short time under the FL framework, surpassing state-of-the-art NAS methods and GCN models.
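To make the search loop described above concrete, the following is a minimal, self-contained Python sketch of federated evolutionary architecture search. It is an illustration under assumptions, not the paper's implementation: the operator set OPS, the population and round sizes, and client_fitness are all hypothetical, and the mock score stands in for the SuperNet-based, weight-sharing evaluation each client would perform on its private graph data. The property it demonstrates is that clients only return fitness scores to the server; raw data never leaves the local device.

```python
import random

# Hypothetical search space: an architecture is a list of layer-wise
# operator choices drawn from a fixed set of GCN-style operators.
OPS = ["gcn", "sage", "gat", "skip"]
NUM_LAYERS = 4
POP_SIZE = 8
NUM_ROUNDS = 5
NUM_CLIENTS = 3

def random_arch():
    return [random.choice(OPS) for _ in range(NUM_LAYERS)]

def client_fitness(arch, client_id):
    # Stand-in for local evaluation: in FL-AGCNS a client would score
    # `arch` on its private graph by extracting the corresponding
    # sub-network from the shared GCN SuperNet (weights reused rather
    # than trained from scratch). Here: a deterministic mock score.
    rng = random.Random(hash((tuple(arch), client_id)))
    return rng.random()

def aggregate(scores):
    # The server combines per-client fitness without seeing raw data.
    return sum(scores) / len(scores)

def crossover(a, b):
    cut = random.randrange(1, NUM_LAYERS)
    return a[:cut] + b[cut:]

def mutate(arch, rate=0.2):
    return [random.choice(OPS) if random.random() < rate else op
            for op in arch]

population = [random_arch() for _ in range(POP_SIZE)]
for rnd in range(NUM_ROUNDS):
    # Each client evaluates every candidate locally; only the scalar
    # fitness scores travel back to the server.
    fitness = [aggregate([client_fitness(a, c) for c in range(NUM_CLIENTS)])
               for a in population]
    ranked = [a for _, a in sorted(zip(fitness, population),
                                   key=lambda t: t[0], reverse=True)]
    parents = ranked[:POP_SIZE // 2]
    children = [mutate(crossover(*random.sample(parents, 2)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

print("best architecture:", ranked[0])
```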