When autonomous robots interact with humans, such as during autonomous driving, explicit safety guarantees are crucial to avoid potentially life-threatening accidents. Many data-driven methods have explored learning probabilistic bounds over human agents' trajectories (i.e., confidence tubes that contain trajectories with probability $1-\delta$), which can then be used to guarantee safety with probability $1-\delta$. However, almost all existing works consider $\delta \geq 0.001$. The purpose of this paper is to argue that (1) in safety-critical applications, it is necessary to provide safety guarantees with $\delta < 10^{-8}$, and (2) current learning-based methods are ill-equipped to compute accurate confidence bounds at such low $\delta$. Using human driving data (from the highD dataset), as well as synthetically generated data, we show that current uncertainty models use inaccurate distributional assumptions to describe human behavior and/or require infeasible amounts of data to accurately learn confidence bounds for $\delta \leq 10^{-8}$. These two issues result in unreliable confidence bounds, which can have dangerous implications if such bounds are relied upon in safety-critical systems.
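As a rough, back-of-envelope illustration of the data-requirement claim (not the paper's own analysis), the sketch below uses the standard binomial "rule of three" argument: if a confidence tube is to be validated empirically at level $\delta$, and zero violations are observed, roughly $3/\delta$ independent trajectories are needed before the observed violation rate can be bounded below $\delta$ with 95% confidence. The function name and the 95% confidence level are illustrative choices, not quantities from the paper.

```python
import math

def samples_needed(delta: float, confidence: float = 0.95) -> float:
    """Approximate number of i.i.d. trajectories n such that observing zero
    violations yields a (confidence)-level upper bound of delta on the
    true violation probability: (1 - delta)^n <= 1 - confidence."""
    return math.log(1.0 - confidence) / math.log(1.0 - delta)

for d in (1e-3, 1e-8):
    print(f"delta = {d:.0e}: need ~{samples_needed(d):.1e} trajectories")
# delta = 1e-03 needs ~3.0e+03 trajectories, while delta = 1e-08 needs ~3.0e+08,
# which is orders of magnitude more than typical driving datasets provide.
```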