Seamlessly integrating rules into Learning-from-Demonstrations (LfD) policies is a critical requirement for the real-world deployment of AI agents. Recently, Signal Temporal Logic (STL) has been shown to be an effective language for encoding rules as spatio-temporal constraints. This work uses Monte Carlo Tree Search (MCTS) as a means of integrating STL specifications into a vanilla LfD policy to improve constraint satisfaction. We propose augmenting the MCTS heuristic with STL robustness values to bias the tree search towards branches with higher constraint satisfaction. While this domain-independent method can be applied to integrate STL rules online into any pre-trained LfD algorithm, we choose goal-conditioned Generative Adversarial Imitation Learning as the offline LfD policy. We apply the proposed method to the domain of planning trajectories for General Aviation aircraft around a non-towered airfield. Results in a simulator trained on real-world data show a 60% performance improvement over baseline LfD methods that do not use the STL heuristic.
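The core mechanism described above — biasing MCTS node selection with STL robustness values — can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: the UCB1 form, the weight `LAMBDA`, and the placeholder `stl_robustness` function are all assumptions for exposition.

```python
# Illustrative sketch: UCB1 selection augmented with an STL robustness bonus,
# so the tree search favors branches with higher constraint satisfaction.
# All names here (Node, LAMBDA, stl_robustness) are hypothetical.
import math
from dataclasses import dataclass, field

LAMBDA = 1.0               # assumed weight on the STL robustness bonus
C_EXPLORE = math.sqrt(2)   # standard UCB1 exploration constant

@dataclass
class Node:
    state: tuple                     # e.g. an aircraft state sample
    visits: int = 0
    value_sum: float = 0.0
    children: list = field(default_factory=list)

def stl_robustness(state) -> float:
    """Placeholder robustness value: positive iff the trajectory prefix
    reaching `state` satisfies the STL constraint. A real system would
    evaluate the STL formula over the full trajectory prefix."""
    altitude = state[0]
    return altitude - 300.0          # toy rule: "stay above 300 ft"

def score(child: Node, parent_visits: int) -> float:
    """UCB1 augmented with an STL robustness term."""
    if child.visits == 0:
        return float("inf")          # expand unvisited children first
    exploit = child.value_sum / child.visits
    explore = C_EXPLORE * math.sqrt(math.log(parent_visits) / child.visits)
    return exploit + explore + LAMBDA * stl_robustness(child.state)

def select(parent: Node) -> Node:
    """Pick the child maximizing the robustness-augmented UCB score."""
    return max(parent.children, key=lambda c: score(c, parent.visits))
```

Under this assumed formulation, branches whose partial trajectories violate the STL constraints receive negative robustness and are visited less often, while the standard exploit/explore terms of MCTS are left intact.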