The task of driver attention prediction has drawn considerable interest from researchers in robotics and the autonomous vehicle industry. Driver attention prediction can play an instrumental role in mitigating and preventing high-risk events such as collisions and casualties. However, existing driver attention prediction models neglect the distraction state and intention of the driver, both of which can significantly influence how the driver observes the surroundings. To address these issues, we present a new driver attention dataset, CoCAtt (Cognitive-Conditioned Attention). Unlike previous driver attention datasets, CoCAtt includes per-frame annotations that describe the distraction state and intention of the driver. In addition, the attention data in our dataset is captured in both manual and autopilot modes using eye-tracking devices of different resolutions. Our results demonstrate that incorporating these two driver states into attention modeling can improve the performance of driver attention prediction. To the best of our knowledge, this work is the first to provide attention data collected in autopilot mode. Furthermore, CoCAtt is currently the largest and most diverse driver attention dataset in terms of autonomy levels, eye-tracker resolutions, and driving scenarios.