Uncertain partially observable Markov decision processes (uPOMDPs) allow the probabilistic transition and observation functions of standard POMDPs to belong to a so-called uncertainty set. Such uncertainty, referred to as epistemic uncertainty, captures uncountable sets of probability distributions caused by, for instance, a lack of available data. We develop an algorithm to compute finite-memory policies for uPOMDPs that robustly satisfy specifications against any admissible distribution. In general, computing such policies is both theoretically and practically intractable. We provide an efficient solution to this problem in four steps. (1) We state the underlying problem as a nonconvex optimization problem with infinitely many constraints. (2) A dedicated dualization scheme yields a dual problem that is still nonconvex but has finitely many constraints. (3) We linearize this dual problem, and (4) we solve the resulting finite linear program to obtain locally optimal solutions to the original problem. The resulting problem formulation is exponentially smaller than those of existing methods. We demonstrate the applicability of our algorithm on large instances of an aircraft collision-avoidance scenario and a novel spacecraft motion planning case study.
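To give a flavor of step (2), the sketch below illustrates on a toy example why a robust constraint quantified over infinitely many distributions can collapse to a finite computation. It is not the paper's algorithm: it only considers a single linear expression p·x over an interval uncertainty set {p : l ≤ p ≤ u, Σp = 1}, and all numbers, function names, and the choice of `scipy.optimize.linprog` are illustrative assumptions.

```python
# Hedged toy sketch (not the paper's method): the worst case of p.x over
# the interval uncertainty set {p : l <= p <= u, sum(p) = 1} can be found
# by a small LP, or in closed form -- so the universally quantified
# constraint "for all admissible p" reduces to one finite check.
from scipy.optimize import linprog

def worst_case_lp(x, l, u):
    """Maximize p.x over the interval probability simplex via an LP."""
    res = linprog(c=[-xi for xi in x],                # linprog minimizes
                  A_eq=[[1.0] * len(x)], b_eq=[1.0],  # p is a distribution
                  bounds=list(zip(l, u)))             # interval bounds on p
    return -res.fun

def worst_case_greedy(x, l, u):
    """Closed form: start at the lower bounds, then push the leftover
    probability mass onto coordinates with the largest x first."""
    p = list(l)
    rest = 1.0 - sum(l)
    for i in sorted(range(len(x)), key=lambda i: -x[i]):
        step = min(rest, u[i] - l[i])
        p[i] += step
        rest -= step
    return sum(pi * xi for pi, xi in zip(p, x))

# Illustrative data: both routes agree on the worst-case value.
x = [1.0, 3.0, 2.0]
l = [0.2, 0.1, 0.2]
u = [0.6, 0.5, 0.5]
assert abs(worst_case_lp(x, l, u) - worst_case_greedy(x, l, u)) < 1e-6
```

The paper's actual uncertainty sets, specifications, and dual program are far more general; this fragment only conveys the intuition that duality turns a semi-infinite robustness requirement into finitely many constraints.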