Vehicular networks enable vehicles to support real-time vehicular applications through training data. Due to their limited computing capability, vehicles usually transmit data to a road side unit (RSU) at the network edge for processing. However, vehicles are usually reluctant to share data with each other due to privacy concerns. In traditional federated learning (FL), each vehicle trains on its data locally to obtain a local model and then uploads the local model to the RSU to update the global model; data privacy is thus protected by sharing model parameters instead of raw data. Traditional FL updates the global model synchronously, i.e., the RSU must wait for all vehicles to upload their local models before updating the global model. However, vehicles may drive out of the coverage of the RSU before they finish training their local models, which reduces the accuracy of the global model. Asynchronous federated learning (AFL) addresses this problem: the RSU updates the global model as soon as it receives a local model from a vehicle. However, the amount of data, the computing capability, and the mobility of each vehicle may affect the accuracy of the global model. In this paper, we jointly consider the amount of data, computing capability, and vehicle mobility to design an AFL scheme that improves the accuracy of the global model. Extensive simulation experiments demonstrate that our scheme outperforms the FL scheme.
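To make the synchronous/asynchronous distinction concrete, the following is a minimal sketch of an AFL-style global update: the RSU blends each arriving local model into the global model immediately instead of waiting for all vehicles. The mixing weight `alpha`, the staleness-based `decay`, and the function name are illustrative assumptions, not the paper's exact aggregation rule.

```python
import numpy as np

def afl_update(global_model, local_model, alpha=0.5, staleness=0, decay=0.5):
    """Asynchronous global update (illustrative sketch).

    Blends an incoming local model into the global model as soon as it
    arrives, down-weighting contributions that were trained against an
    older version of the global model (larger `staleness`).
    """
    weight = alpha * (decay ** staleness)
    return (1.0 - weight) * global_model + weight * local_model

# Toy usage: two vehicles report local models at different times.
g = np.zeros(3)
g = afl_update(g, np.ones(3), staleness=0)          # fresh local model
g = afl_update(g, 2.0 * np.ones(3), staleness=2)    # stale local model, smaller weight
```

In contrast, a synchronous FL round would collect all local models first and average them in one step; the asynchronous rule above is what lets the RSU use a local model even if its vehicle then leaves the RSU's coverage.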