We present a model-free reinforcement learning algorithm that finds an optimal policy for a finite-horizon Markov decision process (MDP) while guaranteeing a desired lower bound on the probability of satisfying a signal temporal logic (STL) specification. We propose a method that augments the MDP state space to capture the required state history and expresses the STL objective as a reachability objective. The planning problem can then be formulated as a finite-horizon constrained Markov decision process (CMDP). For a general finite-horizon CMDP problem with unknown transition probabilities, we develop a reinforcement learning scheme that can leverage any model-free RL algorithm to produce an approximately optimal policy from the general space of non-stationary randomized policies. We illustrate the effectiveness of our approach on robotic motion planning for complex missions under uncertainty, subject to performance objectives.
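To make the state-augmentation and reachability-reduction idea concrete, here is a minimal sketch on a hypothetical toy problem: a one-dimensional chain MDP where the STL task "eventually visit the goal state within the horizon" is reduced to reachability by adding one Boolean flag to the state (the required "history" here is just that bit), and a time-indexed tabular Q-learning run estimates the satisfaction probability of the best non-stationary policy. All names and parameters below are illustrative assumptions, not the paper's construction, and the constrained (CMDP) part of the method is not shown.

```python
import random

# Toy chain MDP (illustrative, not from the paper): positions 0..N-1,
# actions move left/right, and with probability SLIP the action flips.
N, H = 5, 8           # chain length, finite horizon
SLIP = 0.1            # slip probability
GOAL = 4              # state the STL "eventually" task must visit

def step(s, a):
    """One transition on the augmented state s = (position, goal_seen)."""
    pos, seen = s
    if random.random() < SLIP:
        a = -a
    pos = min(max(pos + a, 0), N - 1)
    # The augmentation: record in the state whether GOAL has been visited,
    # so "eventually GOAL" becomes plain reachability of seen == True.
    return (pos, seen or pos == GOAL)

# Finite-horizon Q-table indexed by time step as well as augmented state,
# so the greedy policy extracted from it is non-stationary.
Q = {(t, (p, b), a): 0.0
     for t in range(H) for p in range(N)
     for b in (False, True) for a in (-1, 1)}

random.seed(0)
alpha, eps = 0.2, 0.2
for episode in range(5000):
    s = (0, False)
    for t in range(H):
        # Epsilon-greedy action selection at time step t.
        if random.random() < eps:
            a = random.choice((-1, 1))
        else:
            a = max((-1, 1), key=lambda x: Q[(t, s, x)])
        s2 = step(s, a)
        # Reachability reward: 1 at the final step iff the flag is set,
        # so the return equals the indicator of STL satisfaction.
        r = float(s2[1]) if t == H - 1 else 0.0
        best_next = 0.0 if t == H - 1 else max(Q[(t + 1, s2, x)]
                                               for x in (-1, 1))
        Q[(t, s, a)] += alpha * (r + best_next - Q[(t, s, a)])
        s = s2

# Estimated probability of satisfying the specification under the
# greedy time-dependent policy, read off at the initial augmented state.
est = max(Q[(0, (0, False), a)] for a in (-1, 1))
```

In this sketch `est` approximates the maximal satisfaction probability; the paper's method would additionally enforce a lower bound on this quantity as a constraint while optimizing a separate performance objective.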