There are spatio-temporal rules that dictate how robots should operate in complex environments; e.g., road rules govern how (self-driving) vehicles should behave on the road. However, seamlessly incorporating such rules into a robot control policy remains challenging, especially for real-time applications. In this work, given a desired spatio-temporal specification expressed in the Signal Temporal Logic (STL) language, we propose a semi-supervised controller synthesis technique that is attuned to human-like behaviors while satisfying desired STL specifications. Offline, we synthesize a trajectory-feedback neural network controller via an adversarial training scheme that summarizes past spatio-temporal behaviors when computing controls; then online, we perform gradient steps to improve specification satisfaction. Central to the offline phase is an imitation-based regularization component that fosters better policy exploration and helps induce naturalistic human behaviors. Our experiments demonstrate that imitation-based regularization yields better qualitative and quantitative performance than optimizing an STL objective alone, as done in prior work. We demonstrate the efficacy of our approach with an illustrative case study and show that our proposed controller outperforms a state-of-the-art shooting method in both performance and computation time.