A significant portion of driving hazards is caused by human error and disregard for local driving regulations; consequently, an intelligent assistance system can be beneficial. This paper proposes a novel vision-based modular package that helps ensure drivers' safety by perceiving the environment. Each module is designed with both accuracy and inference time in mind to deliver real-time performance. As a result, the proposed system can be deployed on a wide range of vehicles with minimal hardware requirements. Our modular package comprises four main sections: lane detection, object detection, segmentation, and monocular depth estimation. Each section is accompanied by novel techniques that improve its accuracy, that of the other modules, and that of the system as a whole. Furthermore, a GUI is developed to display the perceived information to the driver. In addition to using public datasets such as BDD100K, we have also collected and annotated a local dataset, which we use to fine-tune and evaluate our system. We show that the accuracy of our system exceeds 80% in all sections. Our code and data are available at https://github.com/Pandas-Team/Autonomous-Vehicle-Environment-Perception