Anticipating the lane change intentions of surrounding vehicles is crucial for efficient and safe decision making in an autonomous driving system. Previous works often adopt physical variables such as driving speed and acceleration for lane change classification; however, physical variables carry no semantic information. Although 3D CNNs have been developing rapidly, few methods exploit action recognition models and appearance features for lane change recognition, and those that do require additional information to pre-process the data. In this work, we propose an end-to-end framework comprising two action recognition methods for lane change recognition, using video data collected by cameras. Our method achieves the best lane change classification results on the PREVENTION dataset using only its RGB video data. Class activation maps demonstrate that action recognition models can efficiently extract lane change motions. We also propose a method to better extract motion cues.
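As a minimal illustration of the class activation maps mentioned above (not the paper's implementation; all shapes and names are hypothetical), the sketch below shows how a CAM is typically derived from the final convolutional features of a 3D CNN: the channel-wise feature maps are combined using the weights of the class in the final linear layer, yielding a saliency volume over time and space.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: C feature channels over T frames of H x W spatial maps,
# as produced by the last conv layer of a 3D action recognition CNN.
C, T, H, W = 8, 4, 7, 7
n_classes = 3  # e.g. left lane change, right lane change, no lane change

features = rng.standard_normal((C, T, H, W))      # last conv-layer activations
fc_weights = rng.standard_normal((n_classes, C))  # final linear-layer weights

# Classification head: global average pool over (T, H, W), then linear layer.
pooled = features.mean(axis=(1, 2, 3))   # shape (C,)
logits = fc_weights @ pooled             # shape (n_classes,)
pred = int(np.argmax(logits))

# Class activation map for the predicted class: channel-wise weighted sum
# of the feature maps, giving one saliency volume of shape (T, H, W).
cam = np.tensordot(fc_weights[pred], features, axes=1)

# Since pooling and the weighted sum are both linear, the mean of the CAM
# recovers the logit of the predicted class.
assert np.isclose(cam.mean(), logits[pred])
```

High CAM values localise the spatio-temporal regions (here, frames and pixels) that drive the classification, which is how such maps can show a model attending to a vehicle's lane change motion.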