Pedestrian safety is a primary concern in autonomous driving. The under-representation of vulnerable groups in today's pedestrian datasets points to an urgent need for a dataset of vulnerable road users. In this paper, we introduce a new vulnerable pedestrian detection dataset, the BG Vulnerable Pedestrian (BGVP) dataset, to help train well-rounded models and thereby encourage research that improves the efficacy of vulnerable pedestrian detection. The dataset includes four classes, i.e., Children Without Disability, Elderly Without Disability, With Disability, and Non-Vulnerable. It consists of images collected from the public domain together with manually annotated bounding boxes. On the proposed dataset, we train and test five state-of-the-art object detection models, i.e., YOLOv4, YOLOv5, YOLOX, Faster R-CNN, and EfficientDet. Our results indicate that YOLOX and YOLOv4 perform the best on our dataset, with YOLOv4 scoring 0.7999 and YOLOX scoring 0.7779 on the mAP 0.5 metric, while YOLOX outperforms YOLOv4 by 3.8 percent on the mAP 0.5:0.95 metric. Generally speaking, all five detectors perform well on the With Disability class and poorly on the Elderly Without Disability class. YOLOX consistently outperforms all other detectors on the per-class mAP 0.5:0.95 metric, obtaining 0.5644, 0.5242, 0.4781, and 0.6796 for Children Without Disability, Elderly Without Disability, Non-Vulnerable, and With Disability, respectively. Our dataset and code are available at https://github.com/devvansh1997/BGVP.
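For reference, the mAP 0.5:0.95 figures quoted above are assumed to follow the standard COCO-style protocol, in which average precision is computed at ten IoU thresholds and then averaged over classes; a sketch of that definition (notation ours, not taken verbatim from the paper) is

\[
\mathrm{mAP}_{0.5:0.95} \;=\; \frac{1}{|C|} \sum_{c \in C} \; \frac{1}{10} \sum_{t \in \{0.50,\, 0.55,\, \dots,\, 0.95\}} \mathrm{AP}_c(t),
\]

where \(C\) is the set of four BGVP classes and \(\mathrm{AP}_c(t)\) is the average precision of class \(c\) at IoU threshold \(t\). Under this reading, the per-class scores reported above correspond to the inner average over thresholds before pooling over \(C\).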