Reinforcement learning (RL) is a promising approach for controlling robotic systems. However, its success in real-world applications has been limited, because ensuring safe exploration while facilitating adequate exploitation is challenging for robotic systems with unknown models and measurement uncertainties. The learning problem becomes even harder for complex tasks over continuous state and action spaces. In this paper, we propose a learning-based robotic control framework with several components: (1) we leverage Linear Temporal Logic (LTL) to express complex tasks over infinite horizons, which are translated into a novel automaton structure; (2) we detail an innovative reward scheme for LTL satisfaction with probabilistic guarantees and, by applying a reward-shaping technique, develop a modular policy-gradient architecture that exploits the automaton structure to decompose the overall task and enhance the performance of the learned controllers; (3) by incorporating Gaussian Processes (GPs) to estimate the uncertain dynamics, we synthesize model-based safe exploration during the learning process using Exponential Control Barrier Functions (ECBFs) for systems with high relative degree; (4) to further improve exploration efficiency, we exploit the properties of the LTL automaton and the ECBFs to propose a safe guiding process. Finally, we demonstrate the effectiveness of the framework in several robotic environments, showing that the ECBF-based modular deep RL algorithm achieves near-perfect success rates and safety guarding with high-probability confidence during training.
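The safe-exploration step in (3) is typically realized as a minimally invasive filter: the RL policy proposes an action, and the ECBF condition, which for an affine control system reduces to a linear constraint on the control input at the current state, is enforced by projecting that action onto the safe set. The sketch below is illustrative only and assumes a single affine constraint `a·u + b >= 0` (the terms `a` and `b` would come from the Lie derivatives of the barrier function and the GP dynamics estimate, which are not shown); the function and variable names are ours, not from the paper. With one constraint, the usual quadratic program has a closed-form solution:

```python
import numpy as np

def ecbf_safety_filter(u_rl, a, b):
    """Project an RL action u_rl onto the half-space {u : a·u + b >= 0}.

    Minimal sketch of a QP-based ECBF safety filter for one affine
    constraint. In a relative-degree-2 setting, a and b would encode
    L_gL_f h(x) and L_f^2 h(x) + K·[h(x), h_dot(x)], evaluated on the
    (GP-estimated) dynamics; here they are given directly.
    """
    u_rl = np.asarray(u_rl, dtype=float)
    a = np.asarray(a, dtype=float)
    slack = a @ u_rl + b
    if slack >= 0.0:
        # RL action already satisfies the barrier condition; pass through.
        return u_rl
    # Minimum-norm correction: move onto the constraint boundary.
    return u_rl - (slack / (a @ a)) * a
```

For example, with `a = [0, 1]` and `b = -0.5`, a proposed action `[1, 0]` violates the constraint and is corrected to `[1, 0.5]`, the closest action on the boundary, while `[1, 2]` is passed through unchanged. With multiple barrier constraints one would solve the QP numerically instead of using this projection.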