Cooperative perception allows a Connected Autonomous Vehicle (CAV) to interact with nearby CAVs to enhance the perception of surrounding objects, increasing safety and reliability. It can compensate for the limitations of conventional vehicular perception, such as blind spots, low resolution, and adverse weather effects. An effective feature fusion model for intermediate-fusion cooperative perception can improve feature selection and information aggregation and thereby further enhance perception accuracy. We propose adaptive feature fusion models with trainable feature selection modules. One of the proposed models, Spatial-wise Adaptive feature Fusion (S-AdaFusion), outperforms all other state-of-the-art (SOTA) methods on two subsets of the OPV2V dataset: Default CARLA Towns for vehicle detection and Culver City for domain adaptation. In addition, previous studies have only tested cooperative perception for vehicle detection, yet a pedestrian is far more likely to be seriously injured in a traffic accident. We therefore evaluate the performance of cooperative perception for both vehicle and pedestrian detection using the CODD dataset. Our architecture achieves higher Average Precision (AP) than other existing models for both vehicle and pedestrian detection on CODD. The experiments demonstrate that cooperative perception also improves pedestrian detection accuracy compared to the conventional single-vehicle perception process.
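To illustrate the idea of spatial-wise adaptive fusion with a trainable feature selection module, the following is a minimal sketch, not the authors' released implementation: it assumes each CAV shares an intermediate bird's-eye-view feature map already warped into the ego frame, predicts a per-location weight for every vehicle with a small convolutional head, and aggregates the maps as a softmax-weighted sum. The class name `SpatialAdaptiveFusion` and the tensor shapes are illustrative assumptions.

```python
# Hypothetical sketch of a spatial-wise adaptive fusion module (PyTorch).
import torch
import torch.nn as nn


class SpatialAdaptiveFusion(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Trainable feature-selection head: maps each CAV's feature map
        # to a single-channel spatial score map.
        self.score = nn.Sequential(
            nn.Conv2d(channels, channels // 2, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 2, 1, kernel_size=1),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (N, C, H, W) -- one intermediate feature map per CAV,
        # assumed to be aligned to the ego vehicle's coordinate frame.
        scores = self.score(feats)               # (N, 1, H, W)
        weights = torch.softmax(scores, dim=0)   # normalize across vehicles at each location
        fused = (weights * feats).sum(dim=0)     # (C, H, W) fused ego-view feature map
        return fused


if __name__ == "__main__":
    cav_feats = torch.randn(3, 64, 100, 100)     # e.g., 3 CAVs sharing 64-channel BEV features
    fused = SpatialAdaptiveFusion(64)(cav_feats)
    print(fused.shape)                           # torch.Size([64, 100, 100])
```

The per-location softmax lets the network emphasize whichever vehicle has the most informative view of each spatial region, which is the intuition behind adaptive feature selection in intermediate fusion.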