The task of driver attention prediction has drawn considerable interest among researchers in robotics and the autonomous vehicle industry. Driver attention prediction can play an instrumental role in mitigating and preventing high-risk events, like collisions and casualties. However, existing driver attention prediction models neglect the distraction state and intention of the driver, which can significantly influence how drivers observe their surroundings. To address these issues, we present a new driver attention dataset, CoCAtt (Cognitive-Conditioned Attention). Unlike previous driver attention datasets, CoCAtt includes per-frame annotations that describe the distraction state and intention of the driver. In addition, the attention data in our dataset is captured in both manual and autopilot modes using eye-tracking devices of different resolutions. Our results demonstrate that incorporating the above two driver states into attention modeling can improve the performance of driver attention prediction. To the best of our knowledge, this work is the first to provide autopilot attention data. Furthermore, CoCAtt is currently the largest and the most diverse driver attention dataset in terms of autonomy levels, eye tracker resolutions, and driving scenarios. CoCAtt is available for download at https://cocatt-dataset.github.io.