With increasing automation in passenger vehicles, the study of safe and smooth occupant-vehicle interaction and control transitions is key. In this study, we focus on developing contextual, semantically meaningful representations of the driver state, which can then be used to determine the appropriate timing and conditions for transferring control between driver and vehicle. To this end, we conduct a large-scale, controlled, real-world data study in which participants are instructed to take over control from an autonomous agent under different driving conditions while engaged in a variety of distracting activities. These take-over events are captured using multiple driver-facing cameras, which, once labelled, yield a dataset of control transitions and their corresponding take-over times (TOTs). We then develop and train TOT models that operate sequentially on mid- to high-level features produced by computer vision algorithms running on different driver-facing camera views. The proposed TOT model produces continuous predictions of take-over times without delay, and shows promising qualitative and quantitative results in complex real-world scenarios.
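To make the sequential prediction setup concrete, the following is a minimal, purely illustrative sketch (not the paper's actual architecture) of a recurrent model that consumes per-frame driver-state features and emits a take-over-time estimate at every timestep, so that predictions are available continuously without delay. The feature dimensions, weight shapes, and the `predict_tot_sequence` helper are all hypothetical; in the study, the inputs would be mid- to high-level features produced by vision algorithms on the driver-facing camera views.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: D-dim per-frame driver-state feature vector
# (e.g. gaze, hand, and foot cues from vision models), H hidden units.
D, H = 16, 32

# Randomly initialized weights stand in for trained parameters.
Wx = rng.normal(scale=0.1, size=(H, D))  # input-to-hidden
Wh = rng.normal(scale=0.1, size=(H, H))  # hidden-to-hidden (recurrence)
b = np.zeros(H)
Wo = rng.normal(scale=0.1, size=(1, H))  # hidden-to-output
bo = np.zeros(1)

def predict_tot_sequence(frames):
    """Run a simple RNN over per-frame features, emitting a TOT estimate
    (in seconds, kept positive via softplus) at every timestep rather
    than waiting for the end of the clip."""
    h = np.zeros(H)
    tots = []
    for x in frames:
        h = np.tanh(Wx @ x + Wh @ h + b)   # recurrent state update
        z = float(Wo @ h + bo)
        tots.append(np.log1p(np.exp(z)))   # softplus -> positive seconds
    return tots

# Usage: a 10-frame clip of 16-dim features yields 10 rolling TOT estimates.
clip = rng.normal(size=(10, D))
preds = predict_tot_sequence(clip)
print(len(preds))
```

The key design point this sketch illustrates is that the model is causal: each estimate depends only on frames seen so far, which is what allows the vehicle to query the current TOT at any moment during a control transition.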