In high-level Autonomous Driving (AD) systems, behavioral planning is in charge of making high-level driving decisions such as cruising and stopping, and is thus highly security-critical. In this work, we perform the first systematic study of semantic security vulnerabilities specific to overly-conservative AD behavioral planning behaviors, i.e., those that can cause failed or significantly-degraded mission performance, which can be critical for AD services such as robo-taxi/delivery. We call them semantic Denial-of-Service (DoS) vulnerabilities, which we envision to be the most generally exposed in practical AD systems due to the tendency toward conservativeness in order to avoid safety incidents. To achieve high practicality and realism, we assume that the attacker can only introduce seemingly-benign external physical objects to the driving environment, e.g., off-road dumped cardboard boxes. To systematically discover such vulnerabilities, we design PlanFuzz, a novel dynamic testing approach that addresses various problem-specific design challenges. Specifically, we propose and identify planning invariants as novel testing oracles, and design new input generation to systematically enforce problem-specific constraints for attacker-introduced physical objects. We also design a novel behavioral planning vulnerability distance metric to effectively guide the discovery. We evaluate PlanFuzz on 3 planning implementations from practical open-source AD systems, and find that it can effectively discover 9 previously-unknown semantic DoS vulnerabilities without false positives. We find all our new designs necessary, as removing any of them generally leads to statistically significant performance drops. We further perform exploitation case studies using simulation and real-vehicle traces. We discuss root causes and potential fixes.
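To make the abstract's idea of distance-guided discovery concrete, the following is a minimal, hypothetical sketch of a mutation-based search loop that perturbs the placement of an attacker-introduced object and keeps candidates that reduce a "vulnerability distance" toward violating a planning invariant. All names (BoxPlacement, mutate_placement, vulnerability_distance, the off-road constraint, and the toy surrogate metric) are illustrative assumptions for exposition, not PlanFuzz's actual implementation or interface.

```python
# Hypothetical sketch of distance-guided search over physical-object placements.
# Not the paper's implementation; all names and the metric are illustrative.
import random
from dataclasses import dataclass


@dataclass
class BoxPlacement:
    """Pose of a seemingly-benign attacker-introduced object (e.g., a cardboard box)."""
    x: float    # lateral offset from lane center (m); constrained to stay off-road
    y: float    # longitudinal position along the road (m)
    yaw: float  # heading (rad)


def mutate_placement(p: BoxPlacement, step: float = 0.5) -> BoxPlacement:
    """Randomly perturb a candidate while enforcing a toy off-road constraint (x >= 2.0 m)."""
    return BoxPlacement(
        x=max(2.0, p.x + random.uniform(-step, step)),
        y=p.y + random.uniform(-step, step),
        yaw=p.yaw + random.uniform(-0.1, 0.1),
    )


def vulnerability_distance(p: BoxPlacement) -> float:
    """Placeholder metric: how far the planner is from violating a planning invariant
    (e.g., 'the ego vehicle keeps making progress'); 0 means the invariant is violated.
    A real setup would run the behavioral planner on the scenario; here a toy surrogate."""
    return abs(p.x - 2.0) + abs(p.y - 30.0)


def search(seed: BoxPlacement, iterations: int = 1000) -> BoxPlacement:
    """Greedy mutation loop: keep mutants that reduce the vulnerability distance."""
    best, best_d = seed, vulnerability_distance(seed)
    for _ in range(iterations):
        cand = mutate_placement(best)
        d = vulnerability_distance(cand)
        if d < best_d:
            best, best_d = cand, d
        if best_d == 0.0:
            break  # invariant violated: candidate semantic DoS scenario found
    return best


if __name__ == "__main__":
    print(search(BoxPlacement(x=3.0, y=20.0, yaw=0.0)))
```

The sketch only illustrates the overall feedback-driven structure (constrained mutation plus a distance metric as guidance); the paper's actual oracles, constraints, and metric are defined over the planners under test.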