Today, various machine learning (ML) applications offer continuous data processing and real-time data analytics at the edge of a wireless network. Distributed ML solutions are seriously challenged by resource heterogeneity, in particular by the so-called straggler effect. To address this issue, we design a novel device-to-device (D2D)-aided coded federated learning method (D2D-CFL) that balances the load across devices while characterizing the privacy leakage. The proposed solution captures the system dynamics of the data (time-dependent learning model, varying intensity of data arrivals), the devices (heterogeneous computational resources and volumes of training data), and the deployment (varying locations and D2D graph connectivity). We derive the compression rate that minimizes the processing time and establish its connection with the convergence time. The resulting optimization problem yields suboptimal compression parameters that reduce the total training time. Our proposed method is beneficial for real-time collaborative applications in which users continuously generate training data.
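To make the coded-offloading idea concrete, the following is a minimal sketch, not the paper's actual algorithm: a straggler device applies a random linear code (a Gaussian sketch with compression rate 0.25, chosen here arbitrarily) to its local least-squares training data before offloading it to a D2D helper, so the helper trains on a smaller coded dataset rather than the raw samples. All names and parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical local dataset: n samples of dimension d on a slow device.
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

def compress(X, y, rate, rng):
    """Random linear coding: project n raw samples down to m = rate * n
    coded samples, shrinking the helper's per-iteration workload."""
    m = max(1, int(rate * X.shape[0]))
    G = rng.normal(size=(m, X.shape[0])) / np.sqrt(m)  # E[G.T @ G] = I
    return G @ X, G @ y

def gradient(X, y, w):
    """Least-squares gradient on whatever (coded) data the helper holds."""
    return X.T @ (X @ w - y) / X.shape[0]

# Offload only the coded data over the D2D link (compression rate 0.25).
Xc, yc = compress(X, y, 0.25, rng)

# The helper runs gradient descent on the coded data alone.
w = np.zeros(d)
for _ in range(500):
    w -= 0.1 * gradient(Xc, yc, w)

print(np.linalg.norm(w - w_true))
```

Because the sketch preserves the least-squares objective in expectation, the helper recovers a model close to the one trained on the raw data while processing only a quarter of the samples; the compression rate is exactly the knob the abstract describes trading processing time against convergence.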