Perception algorithms in autonomous driving systems face significant challenges in long-tail traffic scenarios, where Safety of the Intended Functionality (SOTIF) problems can be triggered by performance insufficiencies of the algorithms and the dynamic operational environment. However, such scenarios are not systematically covered by current open-source datasets, and this paper fills that gap. Based on the analysis and enumeration of trigger conditions, a high-quality, diverse dataset is released, comprising various long-tail traffic scenarios collected from multiple sources. To support the development of probabilistic object detection (POD), the dataset annotates the trigger sources that may cause perception SOTIF problems in each scenario as key objects. In addition, an evaluation protocol is proposed to verify the effectiveness of POD algorithms in identifying key objects via uncertainty. The dataset is continuously expanding; the first batch of open-source data comprises 1126 frames with an average of 2.27 key objects and 2.47 normal objects per frame. To demonstrate how the dataset can be used for SOTIF research, this paper further quantifies a perception SOTIF entropy to determine whether a scenario is unknown and unsafe for a perception system. The experimental results show that the quantified entropy effectively and efficiently reflects the failure of the perception algorithm.
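To make the uncertainty-based evaluation concrete, the following is a minimal, hypothetical sketch of how an entropy score over a detector's class probabilities could be aggregated into a frame-level uncertainty signal for flagging potentially unknown, unsafe scenarios. It does not reproduce the paper's SOTIF entropy definition or evaluation protocol; the helper names (`class_entropy`, `scenario_uncertainty`) and the mean aggregation are assumptions for illustration only.

```python
import numpy as np

def class_entropy(probs: np.ndarray) -> float:
    """Shannon entropy of a single detection's class-probability vector."""
    p = np.clip(probs, 1e-12, 1.0)          # avoid log(0)
    return float(-(p * np.log(p)).sum())

def scenario_uncertainty(detections: list[np.ndarray]) -> float:
    """Aggregate per-detection entropies into a frame-level score.

    A high score suggests the frame may be unknown to the detector and worth
    flagging for SOTIF analysis. Mean aggregation is an illustrative choice,
    not the paper's method.
    """
    if not detections:
        return 0.0
    return float(np.mean([class_entropy(p) for p in detections]))

# Example: one confident detection and one ambiguous one (a likely key object).
frame_dets = [np.array([0.95, 0.03, 0.02]), np.array([0.40, 0.35, 0.25])]
print(scenario_uncertainty(frame_dets))
```

Under this sketch, a POD algorithm that assigns visibly higher entropy to annotated key objects than to normal objects would pass the kind of uncertainty-based check the evaluation protocol is intended to perform.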