Autonomous vehicles have limited computational resources, so their control systems must be efficient. The cost and size of sensors have also constrained the development of self-driving cars. To overcome these restrictions, this study proposes an efficient framework for operating vision-based autonomous vehicles that requires only a monocular camera and a few inexpensive radars. The proposed algorithm comprises a multi-task UNet (MTUNet) network for extracting image features, together with constrained iterative linear quadratic regulator (CILQR) and vision predictive control (VPC) modules for rapid motion planning and control. MTUNet is designed to simultaneously solve lane line segmentation, ego-vehicle heading angle regression, road type classification, and traffic object detection at approximately 40 frames per second (FPS) for 228 × 228 pixel RGB input images. The CILQR controllers then use the MTUNet outputs and radar data as inputs to produce driving commands for lateral and longitudinal vehicle guidance within only 1 ms. In particular, the VPC algorithm reduces steering command latency to below the actuator latency, preventing vehicle understeer during tight turns. The VPC algorithm uses road curvature data from MTUNet to estimate the correction to the current steering angle at a look-ahead point and thereby adjust the turning amount. Including the VPC algorithm in a VPC-CILQR controller yields higher performance than CILQR alone: the combined controller minimizes the influence of command lag, maintaining the ego car's speed at 76 km/h and its lateral offset within 0.52 m on a simulated road with a curvature of 0.03 1/m. Our experiments demonstrate that the proposed autonomous driving system, which does not require high-definition maps, could be applied in current autonomous vehicles.
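To make the data flow described above concrete, the following minimal Python sketch shows how perception outputs could feed a steering computation and how a look-ahead curvature correction in the spirit of the VPC step might adjust the command. All function names, constants, and the kinematic bicycle-model feed-forward used here are illustrative assumptions, not the paper's actual MTUNet, CILQR, or VPC implementation.

```python
import numpy as np

# Assumed constants for illustration only (not taken from the paper).
WHEELBASE_M = 2.7       # hypothetical ego-vehicle wheelbase
LOOKAHEAD_M = 15.0      # hypothetical look-ahead distance for the VPC-style step


def mtunet_infer(frame):
    """Stub for the multi-task network.

    In the real system, a 228 x 228 RGB frame would yield lane segmentation,
    heading angle, road type, and detections at ~40 FPS; here we return
    placeholder (heading_rad, curvature_here_1pm, curvature_ahead_1pm).
    """
    return 0.01, 0.02, 0.03


def feedforward_steer(curvature_1pm):
    """Kinematic bicycle-model feed-forward steering angle for a road curvature."""
    return np.arctan(WHEELBASE_M * curvature_1pm)


def vpc_like_correction(current_steer_rad, curvature_ahead_1pm):
    """Shift the current steering command toward the angle needed at the
    look-ahead point, compensating for actuator lag on tight turns."""
    return feedforward_steer(curvature_ahead_1pm) - current_steer_rad


if __name__ == "__main__":
    heading, curv_here, curv_ahead = mtunet_infer(frame=None)
    steer_cmd = feedforward_steer(curv_here)            # stand-in for a CILQR-planned command
    steer_cmd += vpc_like_correction(steer_cmd, curv_ahead)
    print(f"steering command: {np.degrees(steer_cmd):.2f} deg")
```

In the paper's framework, the lateral command would come from the CILQR optimization rather than a pure feed-forward term; the sketch only illustrates how a curvature-based look-ahead correction can be layered on top of the current steering command.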