Object detection with on-board sensors (e.g., lidar, radar, and camera) plays a crucial role in autonomous driving (AD), and these sensors complement each other in modalities. While crowdsensing may potentially exploit these sensors (of huge quantity) to derive more comprehensive knowledge, \textit{federated learning} (FL) appears to be the necessary tool to realize this potential: it enables autonomous vehicles (AVs) to train machine learning models without explicitly sharing raw sensory data. However, multimodal sensors introduce various forms of data heterogeneity across distributed AVs (e.g., label quantity skews and varied modalities), posing critical challenges to effective FL. To this end, we present AutoFed, a heterogeneity-aware FL framework that fully exploits multimodal sensory data on AVs and thus enables robust AD. Specifically, we first propose a novel model design leveraging pseudo-labeling to avoid mistakenly treating unlabeled objects as background. We also propose an autoencoder-based data imputation method to fill in missing data modalities (of certain AVs) using the available ones. To further reconcile the heterogeneity, we finally present a client selection mechanism that exploits the similarities among client models to improve both training stability and convergence rate. Our experiments on a benchmark dataset confirm that AutoFed substantially improves over status quo approaches in both precision and recall, while demonstrating strong robustness to adverse weather conditions.