Owing to resource limitations, efficient computation has long been a critical requirement in autonomous vehicle design. Sensor cost and size further restrict the development of self-driving cars. This paper presents an efficient framework for the operation of vision-based autonomous vehicles; a front-facing camera and a few inexpensive radars are the only sensors required for driving-environment perception. The proposed algorithm comprises a multi-task UNet (MTUNet) for extracting image features and constrained iterative linear quadratic regulator (CILQR) modules for rapid lateral and longitudinal motion planning. The MTUNet simultaneously solves lane-line segmentation, ego-vehicle heading-angle regression, road-type classification, and traffic-object detection at approximately 40 FPS for a 228 × 228 RGB input image. The CILQR modules then take the processed MTUNet outputs and radar data as input to produce driving commands for lateral and longitudinal vehicle guidance; both optimal control problems are solved within 1 ms. The proposed CILQR controllers are shown to be more efficient than sequential quadratic programming (SQP) methods and, in collaboration with the MTUNet, can drive a car autonomously through lane-keeping and car-following maneuvers in unseen simulation environments. Our experiments demonstrate that the proposed autonomous driving system is applicable to modern automobiles.
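Although the abstract gives only a high-level description, the CILQR planning step can be illustrated with a generic constrained iLQR iteration: linearized vehicle dynamics, quadratic tracking costs, and input bounds handled through log-barrier terms. The sketch below is a minimal, self-contained Python example of this general technique; the bicycle-style lateral dynamics, cost weights, steering bound, and the `cilqr_iteration` helper are all illustrative assumptions rather than the paper's actual formulation.

```python
import numpy as np

# Minimal sketch of one CILQR-style planning cycle for lane keeping, assuming
# linearized lateral dynamics x = [lateral_error, heading_error] and a single
# steering input u. Input bounds enter the cost as log-barrier terms. All
# models, weights, and bounds below are illustrative, not the paper's own.

N, dt, v = 30, 0.05, 10.0            # horizon, time step, assumed speed (m/s)
A = np.array([[1.0, v * dt],         # lateral error grows with heading error
              [0.0, 1.0]])
B = np.array([[0.0], [dt]])          # steering input changes heading error
Q = np.diag([1.0, 0.1])              # state tracking weights
R = np.array([[0.1]])                # control effort weight
u_max, t_bar = 0.5, 10.0             # steering bound (rad), barrier sharpness

def barrier_terms(u):
    """Gradient/Hessian of -(1/t)*log barriers enforcing |u| <= u_max."""
    g = (u[0] - u_max, -u[0] - u_max)            # constraints g_i(u) <= 0
    grad = -1.0 / (t_bar * g[0]) + 1.0 / (t_bar * g[1])
    hess = 1.0 / (t_bar * g[0] ** 2) + 1.0 / (t_bar * g[1] ** 2)
    return np.array([grad]), np.array([[hess]])

def cilqr_iteration(x0, u_seq):
    """One backward Riccati pass plus forward rollout (a single iLQR step)."""
    xs = [x0]                                    # nominal trajectory
    for u in u_seq:
        xs.append(A @ xs[-1] + B @ u)
    P, p = Q.copy(), Q @ xs[-1]                  # terminal value function
    ks, Ks = [None] * N, [None] * N
    for k in reversed(range(N)):                 # backward pass
        bg, bh = barrier_terms(u_seq[k])
        Quu = R + bh + B.T @ P @ B
        Qux = B.T @ P @ A
        qu = R @ u_seq[k] + bg + B.T @ p
        Ks[k] = -np.linalg.solve(Quu, Qux)       # feedback gain
        ks[k] = -np.linalg.solve(Quu, qu)        # feedforward correction
        P = Q + A.T @ P @ A + Qux.T @ Ks[k]
        p = Q @ xs[k] + A.T @ p + Qux.T @ ks[k]
    x, new_u = x0, []                            # forward rollout
    for k in range(N):
        u = u_seq[k] + ks[k] + Ks[k] @ (x - xs[k])
        new_u.append(u)
        x = A @ x + B @ u
    return new_u

# Usage: iterate from zero controls; the first command would go to the vehicle.
u_seq = [np.zeros(1) for _ in range(N)]
for _ in range(5):
    u_seq = cilqr_iteration(np.array([0.5, 0.0]), u_seq)
print("steering command:", float(u_seq[0][0]))
```

A full implementation would add a line search and adaptive barrier sharpening; warm-starting the control sequence between cycles is what typically makes sub-millisecond solve times attainable at horizons this short.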