The wireless network is undergoing a trend from "connection of things" to "connection of intelligence". With data spread over communication networks and computing capability enhanced on devices, distributed learning has become a hot topic in both the industrial and academic communities. Many frameworks, such as federated learning and federated distillation, have been proposed. However, few of them handle obstacles such as the time-varying topology resulting from the characteristics of wireless networks. In this paper, we propose a distributed learning framework based on a scalable deep neural network (DNN) design. By exploiting the permutation equivariance and invariance properties of the learning tasks, DNNs with different scales for different clients can be built from two basic parameter sub-matrices. Further, model aggregation can also be conducted on these two sub-matrices to improve learning convergence and performance. Finally, simulation results verify the benefits of the proposed framework by comparison with several baselines.
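To illustrate the core idea of building DNNs of different scales from two shared parameter sub-matrices, the sketch below implements a standard permutation-equivariant linear layer: each item's output combines its own features (through one sub-matrix) with the aggregate of the other items' features (through the other). All names here (`A`, `B`, `equivariant_layer`) are illustrative assumptions, not the paper's actual construction; the point is only that the same two fixed sub-matrices serve clients of any scale `n`, so they can also be averaged directly during model aggregation.

```python
import numpy as np

def equivariant_layer(X, A, B):
    """Permutation-equivariant linear layer (illustrative sketch).

    X: (n, d_in) features of n items; A, B: (d_in, d_out) sub-matrices.
    Item i's output is  x_i @ A + (sum_{j != i} x_j) @ B, so permuting
    the rows of X permutes the rows of the output identically.
    """
    others = X.sum(axis=0, keepdims=True) - X  # per-item sum of the other items
    return X @ A + others @ B

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))
B = rng.standard_normal((4, 3))

# The same fixed (A, B) serve clients with different scales n:
X_small = rng.standard_normal((2, 4))   # a client with 2 items
X_large = rng.standard_normal((5, 4))   # a client with 5 items
Y_small = equivariant_layer(X_small, A, B)
Y_large = equivariant_layer(X_large, A, B)

# Permutation-equivariance check: permuting inputs permutes outputs.
perm = rng.permutation(5)
assert np.allclose(equivariant_layer(X_large[perm], A, B), Y_large[perm])
```

Because every client's layer, whatever its scale, is parameterized by the same two sub-matrices, aggregation across clients reduces to averaging `A` and `B`, which is what makes convergence under a time-varying topology tractable in this setting.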