Explainable AI, in the context of autonomous systems such as self-driving cars, has drawn broad interest from researchers. Recent studies have found that providing explanations for an autonomous vehicle's actions has many benefits (e.g., increased trust and acceptance), but they place little emphasis on when an explanation is needed and how the content of an explanation changes with driving context. In this work, we investigate in which scenarios people need explanations and how critical an explanation is perceived to be across driving situations and driver types. Through a user experiment, we ask participants to evaluate how necessary an explanation is and measure the impact of explanations on their trust in self-driving cars in different contexts. Moreover, we present a self-driving explanation dataset with first-person explanations and associated necessity measures for 1103 video clips, augmenting the Berkeley Deep Drive Attention dataset. Our research reveals that driver type and driving scenario together determine whether an explanation is necessary. In particular, people tend to agree on the need for explanations in near-crash events but hold differing opinions on ordinary or anomalous driving situations.