Federated learning (FL) has emerged as a way to jointly train a model over distributed data sets in IoT while avoiding central data collection. Due to their limited observation range, such data sets reflect only local information, which limits the quality of the trained models. In practical networks, global information and local observations always coexist, and both must be considered jointly for learning to produce reasonable policies. However, in horizontal FL among distributed clients, the central agency acts only as a model aggregator and does not utilize its global features to further improve the model. This can substantially degrade performance in tasks such as flow prediction, where global information can clearly enhance accuracy. Meanwhile, such global features may not be transmitted directly to the agents for data-security reasons. How to exploit the global observations residing in the central agency while protecting their security thus arises as an important problem in FL. In this paper, we develop the vertical-horizontal federated learning (VHFL) process, in which the global feature is shared with the agents through a procedure similar to vertical FL without extra communication rounds. Considering delay and packet loss, we analyze its convergence in the network system and validate its performance through experiments. The proposed VHFL enhances accuracy compared with horizontal FL while protecting the security of the global data.
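To make the idea concrete, the following is a minimal conceptual sketch of one VHFL-style training round, not the paper's actual implementation: all names (`local_update`, the feature dimensions, the linear-regression objective, FedAvg aggregation, and the use of a low-dimensional "global feature summary" in place of raw global data) are illustrative assumptions. It only shows the high-level flow described above: the server broadcasts the model together with a global feature in the same message, so no extra communication round is added on top of horizontal FL.

```python
# Conceptual VHFL-style round (illustrative sketch; names and task are assumptions).
import numpy as np

rng = np.random.default_rng(0)

NUM_CLIENTS = 4
LOCAL_DIM = 8      # dimension of each client's local features (assumed)
GLOBAL_DIM = 3     # dimension of the server-side global feature summary (assumed)
MODEL_DIM = LOCAL_DIM + GLOBAL_DIM

def local_update(weights, X_local, y, global_feat, lr=0.1):
    """One gradient step of linear regression on [local features, global feature]."""
    X = np.hstack([X_local, np.tile(global_feat, (X_local.shape[0], 1))])
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

# Server state: a global model plus a global feature; only a low-dimensional
# summary of the feature is shared, mimicking the vertical-FL-style sharing.
global_model = np.zeros(MODEL_DIM)
global_feature_summary = rng.normal(size=GLOBAL_DIM)

# Synthetic client datasets (assumption: a simple regression task).
clients = [(rng.normal(size=(20, LOCAL_DIM)), rng.normal(size=20))
           for _ in range(NUM_CLIENTS)]

for rnd in range(5):
    # Broadcast: model and global feature summary go out in the same message,
    # so no extra communication round is added on top of horizontal FL.
    local_models = [
        local_update(global_model.copy(), X, y, global_feature_summary)
        for X, y in clients
    ]
    # Aggregation: plain FedAvg over the returned client models.
    global_model = np.mean(local_models, axis=0)

print("aggregated model after 5 rounds:", global_model)
```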