Owing to resource limitations, efficient computation systems have long been a critical demand in autonomous vehicle design. In addition, sensor cost and size have restricted the development of self-driving cars. To overcome these restrictions, this study proposes an efficient framework for operating vision-based autonomous vehicles; a front-facing camera and a few inexpensive radars are the only sensors required for driving environment perception. The proposed algorithm comprises a multi-task UNet (MTUNet) for extracting image features and constrained iterative linear quadratic regulator (CILQR) modules for rapid lateral and longitudinal motion planning. The MTUNet is designed to simultaneously solve lane-line segmentation, ego vehicle heading-angle regression, road-type classification, and traffic object detection tasks at approximately 40 frames per second (FPS) for a 228 x 228 RGB input image. The linear CILQR controllers then take the processed MTUNet outputs and radar data as inputs to produce driving commands for lateral and longitudinal guidance of the autonomous vehicle; the optimal control problems can be solved within 1 ms. The linear CILQR approach is more efficient than standard sequential quadratic programming (SQP) methods and, combined with the MTUNet, enables lane-keeping and car-following maneuvers in simulated environments without the use of high-definition (HD) maps. Our experiments demonstrate that the proposed autonomous driving system is applicable to current automobile technology.
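To illustrate the kind of optimization at the core of a CILQR-style lateral planner, the following is a minimal sketch of a finite-horizon linear quadratic regulator solved by backward Riccati recursion, i.e., the unconstrained linear core of such a module. The kinematic lane-keeping model, speed, horizon, and cost weights below are illustrative assumptions for the sketch, not the paper's actual vehicle model or parameters.

```python
import numpy as np

# Assumed kinematic lane-keeping model: state = [lateral error, heading error],
# control = steering correction. All values are illustrative, not the paper's.
dt, v = 0.05, 10.0            # time step [s], assumed constant speed [m/s]
A = np.array([[1.0, v * dt],  # lateral error accumulates heading error
              [0.0, 1.0]])
B = np.array([[0.0],
              [v * dt]])      # steering input changes heading error
Q = np.diag([1.0, 0.1])       # state cost: penalize lateral/heading error
R = np.array([[0.05]])        # control cost: penalize steering effort
N = 100                       # planning horizon (5 s at dt = 0.05 s)

def lqr_gains(A, B, Q, R, N):
    """Backward Riccati recursion; returns the feedback gain for each step."""
    P = Q.copy()
    gains = []
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ A - A.T @ P @ B @ K
        gains.append(K)
    return gains[::-1]  # reorder so gains[k] applies at time step k

gains = lqr_gains(A, B, Q, R, N)

# Closed-loop rollout from an initial 1 m lateral offset.
x = np.array([[1.0], [0.0]])
for K in gains:
    u = -K @ x          # optimal feedback steering command
    x = A @ x + B @ u

print(abs(x[0, 0]))     # final lateral error, driven toward zero
```

A full CILQR planner additionally linearizes a nonlinear vehicle model around a nominal trajectory at each iteration and folds state and input constraints (e.g., steering limits, lane bounds) into the cost via barrier terms, repeating this backward pass until convergence.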