Federated learning has been widely applied in autonomous driving because it enables training a model across vehicles without sharing users' data. However, data from autonomous vehicles are usually non-independent and identically distributed (non-IID), which can hinder the convergence of the learning process. In this paper, we propose a new contrastive divergence loss that addresses the non-IID problem in autonomous driving by reducing the impact of divergence factors from transmitted models during each silo's local learning process. We also analyze the effects of contrastive divergence in various autonomous driving scenarios, under multiple network infrastructures, and with different centralized/distributed learning schemes. Intensive experiments on three datasets demonstrate that our proposed contrastive divergence loss improves performance over current state-of-the-art approaches.
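To make the idea concrete, below is a minimal sketch of how a divergence-reducing term could be added to a silo's local training step. The abstract does not give the exact form of the proposed loss, so this is an illustrative stand-in, not the paper's method: it uses an InfoNCE-style contrastive term that pulls the local model's features toward those of the frozen global model received from the server, while pushing apart features of different samples. The names `local_train_step`, `lam`, `tau`, and a model returning `(logits, features)` are all assumptions for this sketch.

```python
import torch
import torch.nn.functional as F

def local_train_step(local_model, global_model, batch, optimizer,
                     task_loss_fn, lam=0.1, tau=0.5):
    """One hypothetical local update on a silo.

    `global_model` is the frozen model received from the server this
    round. The contrastive term (a stand-in for the paper's contrastive
    divergence loss) pulls local features toward the global features of
    the same input (positive pairs on the diagonal) and away from other
    samples in the batch (negatives).
    """
    inputs, targets = batch
    global_model.eval()

    # Task loss on the silo's (possibly non-IID) local data.
    # Assumes the model returns (logits, features).
    preds, local_feat = local_model(inputs)
    loss_task = task_loss_fn(preds, targets)

    with torch.no_grad():
        _, global_feat = global_model(inputs)

    # InfoNCE-style local-vs-global contrastive term.
    z_l = F.normalize(local_feat, dim=1)
    z_g = F.normalize(global_feat, dim=1)
    logits = z_l @ z_g.t() / tau                       # (B, B) similarities
    labels = torch.arange(z_l.size(0), device=z_l.device)
    loss_div = F.cross_entropy(logits, labels)

    # Combined objective; lam trades off task fit vs. divergence control.
    loss = loss_task + lam * loss_div
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch, keeping local features anchored to the received global model is one plausible way to "reduce the impact of divergence factors" under non-IID data; the weight `lam` controls how strongly each silo is prevented from drifting toward its local distribution.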