Federated Learning (FL) is expected to play a prominent role in privacy-preserving machine learning (ML) for autonomous vehicles. FL involves the collaborative training of a single ML model among edge devices on their distributed datasets while keeping the data local. Although FL requires less communication than classical distributed learning, it remains hard to scale to large models. In vehicular networks, FL must be adapted to the limited communication resources, the mobility of the edge nodes, and the statistical heterogeneity of the data distributions. A judicious use of the communication resources, alongside new learning-oriented methods, is therefore vital. To this end, we propose a new architecture for vehicular FL together with the corresponding learning and scheduling processes. The architecture exploits vehicle-to-vehicle (V2V) resources to bypass the communication bottleneck: clusters of vehicles train models simultaneously, and only the aggregated model of each cluster is sent to the multi-access edge computing (MEC) server. The cluster formation is adapted to single- and multi-task learning and takes both communication and learning aspects into account. We show through simulations that, under mobility constraints, the proposed process improves the learning accuracy over standard FL on several non-independent and identically distributed (non-i.i.d.) and unbalanced data distributions.
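To make the cluster-based aggregation idea concrete, below is a minimal sketch of hierarchical (clustered) federated averaging in NumPy. All names, the single-step least-squares update, and the weighting scheme are illustrative assumptions for this sketch, not the paper's implementation: the intra-cluster average stands in for the model exchange over V2V links, and the final average for the MEC-server aggregation.

```python
# Minimal sketch of clustered federated averaging (hypothetical illustration).
import numpy as np

def local_update(weights, data, lr=0.1):
    """One least-squares gradient step as a stand-in for local training."""
    X, y = data
    grad = X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def cluster_round(global_weights, clusters):
    """Clustered FedAvg: vehicles aggregate over V2V inside each cluster;
    only one aggregate per cluster reaches the MEC server."""
    cluster_aggregates, cluster_sizes = [], []
    for cluster in clusters:  # each cluster: list of (X, y) local datasets
        locals_ = [local_update(global_weights.copy(), d) for d in cluster]
        sizes = np.array([len(d[1]) for d in cluster], dtype=float)
        # intra-cluster weighted average (performed over V2V links)
        cluster_aggregates.append(np.average(locals_, axis=0, weights=sizes))
        cluster_sizes.append(sizes.sum())
    # MEC server averages the cluster aggregates, weighted by sample counts
    return np.average(cluster_aggregates, axis=0, weights=cluster_sizes)

# Usage: two clusters of vehicles with synthetic, non-i.i.d. local data.
rng = np.random.default_rng(0)
w = np.zeros(3)
clusters = [
    [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(3)],
    [(rng.normal(1.0, 1.0, size=(20, 3)), rng.normal(size=20)) for _ in range(2)],
]
for _ in range(10):
    w = cluster_round(w, clusters)
print(w)
```

The design point the sketch illustrates is that the server receives one model per cluster rather than one per vehicle, which is what relieves the vehicle-to-infrastructure communication bottleneck.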