Ever-growing data privacy concerns have driven machine learning (ML) architectures from centralized to distributed, making federated learning (FL) and split learning (SL) the two most popular privacy-preserving ML paradigms. However, implementing either conventional FL or SL alone under diverse network conditions (e.g., device-to-device (D2D) and cellular communications) and with heterogeneous clients (e.g., heterogeneous computation, communication, and energy capabilities) faces significant challenges, notably poor architecture scalability and long training time. To this end, this article proposes two novel hybrid distributed ML architectures, namely hybrid split FL (HSFL) and hybrid federated SL (HFSL), which combine the advantages of FL and SL in D2D-enabled heterogeneous wireless networks. We analyze the performance of HSFL and HFSL and compare their respective advantages, and we present promising open research directions as a reference for future work. Finally, preliminary simulations on three datasets under non-independent and identically distributed settings verify the feasibility of the proposed architectures, which significantly reduce communication/computation cost and training time compared with conventional FL and SL.
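To make the hybrid idea concrete, the following is a minimal, illustrative sketch (not the paper's exact HSFL/HFSL algorithms) of a single training round in which resource-rich clients train the full model locally in FL fashion, resource-constrained clients train only the front portion of the model and offload the remainder to a server in SL fashion, and the client-side weights are then averaged FedAvg-style. The model, data, client split, and all function names here are hypothetical and chosen purely for illustration.

```python
# Hypothetical sketch of one hybrid FL/SL round; not the authors' implementation.
import copy
import torch
import torch.nn as nn

torch.manual_seed(0)

# A small model split into a client-side front part and a server-side back part.
front = nn.Sequential(nn.Linear(10, 16), nn.ReLU())
back = nn.Sequential(nn.Linear(16, 2))

def local_update_full(front_w, back_w, data, target, lr=0.1):
    """FL-style client: trains both model parts locally, returns both updated weights."""
    f, b = copy.deepcopy(front), copy.deepcopy(back)
    f.load_state_dict(front_w); b.load_state_dict(back_w)
    opt = torch.optim.SGD(list(f.parameters()) + list(b.parameters()), lr=lr)
    loss = nn.functional.cross_entropy(b(f(data)), target)
    opt.zero_grad(); loss.backward(); opt.step()
    return f.state_dict(), b.state_dict()

def local_update_split(front_w, server_back, data, target, lr=0.1):
    """SL-style client: computes only the front part; the server finishes the
    forward/backward pass, and gradients flow back through the cut layer."""
    f = copy.deepcopy(front); f.load_state_dict(front_w)
    opt_f = torch.optim.SGD(f.parameters(), lr=lr)
    opt_b = torch.optim.SGD(server_back.parameters(), lr=lr)
    smashed = f(data)                        # cut-layer activations sent to the server
    loss = nn.functional.cross_entropy(server_back(smashed), target)
    opt_f.zero_grad(); opt_b.zero_grad()
    loss.backward()
    opt_f.step(); opt_b.step()
    return f.state_dict()

def fedavg(state_dicts):
    """Equal-weight averaging of client-side weights."""
    avg = copy.deepcopy(state_dicts[0])
    for k in avg:
        avg[k] = torch.stack([sd[k].float() for sd in state_dicts]).mean(0)
    return avg

# One hybrid round with two "strong" and two "weak" clients on random toy data
# (the paper's simulations instead use three datasets under non-IID settings).
front_states = []
for i in range(4):
    x, y = torch.randn(8, 10), torch.randint(0, 2, (8,))
    if i < 2:   # strong client: full local training (FL-style)
        f_sd, b_sd = local_update_full(front.state_dict(), back.state_dict(), x, y)
        back.load_state_dict(b_sd)
    else:       # weak client: split training with the server (SL-style)
        f_sd = local_update_split(front.state_dict(), back, x, y)
    front_states.append(f_sd)

front.load_state_dict(fedavg(front_states))
print("completed one hybrid FL/SL round")
```

In this toy round, the split path spares weak clients the computation and weight upload of the back part of the model, which is the intuition behind the reported savings in communication/computation cost and training time relative to pure FL or pure SL.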