Planning algorithms are used in computational systems to direct autonomous behavior. In a canonical application, for example, planning for autonomous vehicles is used to automate static or continuous planning towards performance, resource management, or functional goals (e.g., arriving at the destination, managing fuel consumption). Existing planning algorithms assume non-adversarial settings; a least-cost plan is developed based on available environmental information (i.e., the input instance). Yet, it is unclear how such algorithms will perform in the face of adversaries attempting to thwart the planner. In this paper, we explore the security of planning algorithms used in cyber- and cyber-physical systems. We present two $\textit{adversarial planning}$ algorithms (one static and one adaptive) that perturb input planning instances to maximize plan cost (often substantially so). We evaluate the performance of these algorithms against two dominant planning algorithms used in commercial applications (D* Lite and Fast Downward) and show that both are vulnerable to extremely limited adversarial action. Our experiments show that an adversary can increase plan costs in 66.9% of instances by removing only a single action from the action space (D* Lite) and can render 70% of instances from an international planning competition unsolvable by removing only three actions (Fast Downward). Finally, we show that finding an optimal perturbation in any search-based planning system is NP-hard.
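To make the threat model concrete, the following is a minimal sketch (not the paper's algorithm) of a static adversarial perturbation on a toy planning instance: the adversary exhaustively tests which single action (here, a graph edge) to remove so that the planner's least-cost plan becomes as expensive as possible. The graph, costs, and the greedy strategy are illustrative assumptions; the planner stands in for search-based systems like D* Lite.

```python
# Illustrative sketch only: a static adversary that removes one action (edge)
# to maximize the cost of the least-cost plan found by a search-based planner.
import heapq

def plan_cost(edges, start, goal):
    """Least-cost plan via Dijkstra search; returns inf if unsolvable."""
    adj = {}
    for u, v, w in edges:
        adj.setdefault(u, []).append((v, w))
    dist = {start: 0.0}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")

def best_single_removal(edges, start, goal):
    """Return the single edge whose removal most increases plan cost."""
    baseline = plan_cost(edges, start, goal)
    best_edge, best_cost = None, baseline
    for e in edges:
        cost = plan_cost([x for x in edges if x != e], start, goal)
        if cost > best_cost:
            best_edge, best_cost = e, cost
    return best_edge, baseline, best_cost

if __name__ == "__main__":
    # Toy instance: a cheap route and an expensive detour from S to G.
    # Removing one edge on the cheap route forces the costly detour.
    edges = [("S", "A", 1), ("A", "G", 1), ("S", "B", 5), ("B", "G", 5)]
    edge, before, after = best_single_removal(edges, "S", "G")
    print(f"remove {edge}: plan cost {before} -> {after}")
```

Note that this brute-force search over single removals already costs one planner invocation per action; the paper's NP-hardness result for finding an optimal perturbation explains why a practical adversary needs heuristics rather than exhaustive search as instances grow.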