We demonstrate that analog transmissions and matched filtering alone can realize the function of an edge server in federated learning (FL). Consequently, a network of massively distributed user equipments (UEs) can achieve large-scale FL without an edge server. We also develop a training algorithm that allows UEs to continuously perform local computing without being interrupted by global parameter uploading, thereby exploiting the full potential of the UEs' processing power. We derive convergence rates for the proposed schemes to quantify their training efficiency. The analysis reveals that when the interference obeys a Gaussian distribution, the proposed algorithm recovers the convergence rate of server-based FL; if the interference distribution is heavy-tailed, then the heavier the tail, the slower the algorithm converges. Nonetheless, the system run time can be substantially reduced by running computation in parallel with communication, and the gain is particularly pronounced when the communication latency is high. These findings are corroborated by extensive simulations.
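To make the server-free aggregation claim concrete, the following is a minimal NumPy sketch of the over-the-air computation principle the abstract alludes to. The variable names (num_ues, local_grads, etc.), the additive-channel model, and the interference scale are illustrative assumptions, not the paper's actual signaling scheme.

```python
import numpy as np

# Conceptual sketch (not the paper's exact scheme): over-the-air
# aggregation. Each UE modulates its local gradient onto a common
# analog waveform; the wireless channel superimposes the transmissions,
# so matched filtering at any receiving UE recovers the *sum* of the
# gradients plus interference -- the quantity an edge server would
# otherwise compute.

rng = np.random.default_rng(0)
num_ues, dim = 20, 8                      # hypothetical network size / model dimension
local_grads = rng.normal(size=(num_ues, dim))

# Analog transmission: the channel adds the waveforms sample by sample.
superimposed = local_grads.sum(axis=0)

# Additive interference: Gaussian here; in the unfavorable regime the
# abstract describes, it would instead follow a heavy-tailed distribution.
interference = rng.normal(scale=0.1, size=dim)
received = superimposed + interference

# Scaling the matched-filter output yields a noisy estimate of the
# average gradient, which every UE applies locally -- no edge server.
avg_grad_estimate = received / num_ues
print(np.linalg.norm(avg_grad_estimate - local_grads.mean(axis=0)))
```

Under Gaussian interference this estimate concentrates around the true average, which is consistent with the abstract's claim that the server-based convergence rate is recovered in that regime.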