Beyond-demonstrator (BD) extrapolation through inverse reinforcement learning (IRL) aims to learn from and then outperform the demonstrator. In sharp contrast to conventional reinforcement learning (RL) algorithms, BD-IRL can sidestep the dilemmas of reward-function design and RL solvability, which opens new avenues to building superior expert systems. Most existing BD-IRL algorithms proceed in two stages, first inferring a reward function and then learning a policy via RL. However, such two-stage BD-IRL algorithms suffer from high computational complexity, low robustness, and large performance variations. In particular, a poor reward function learned in the first stage inevitably incurs severe performance loss in the second stage. In this work, we propose a hybrid adversarial inverse reinforcement learning (HAIRL) algorithm that is one-stage, model-free, curiosity-driven, and trained in a generative-adversarial (GA) fashion. Thanks to the one-stage design, HAIRL integrates reward-function learning and policy optimization into a single procedure, which yields low computational complexity, high robustness, and strong adaptability. More specifically, HAIRL simultaneously imitates the demonstrator and explores BD performance by utilizing hybrid rewards. In particular, the Wasserstein distance (WD) is introduced in HAIRL to stabilize the imitation procedure, while a novel end-to-end curiosity module (ECM) is developed to improve exploration. Finally, extensive simulation results confirm that HAIRL achieves higher performance than other similar BD-IRL algorithms.
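The hybrid-reward idea — an imitation term stabilized by the Wasserstein distance plus a curiosity bonus for exploration — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the closed-form 1-D Wasserstein-1 distance, the ICM-style prediction-error curiosity (standing in for the paper's ECM), the function names, and the `beta` weighting are all assumptions for clarity.

```python
import numpy as np

def w1_distance(a, b):
    """Empirical 1-D Wasserstein-1 distance between two equal-sized
    samples: mean absolute difference of the sorted samples (this
    closed form holds only in one dimension)."""
    return np.mean(np.abs(np.sort(a) - np.sort(b)))

def curiosity_reward(pred_next_state, true_next_state):
    """Intrinsic reward as forward-model prediction error (ICM-style
    stand-in for the paper's end-to-end curiosity module)."""
    return np.sum((pred_next_state - true_next_state) ** 2)

def hybrid_reward(agent_states, demo_states, pred_next, true_next, beta=0.5):
    """Hybrid reward: imitation term (negative WD between agent and
    demonstrator state samples) plus a curiosity bonus; the beta
    trade-off weight is a hypothetical choice, not from the paper."""
    imitation = -w1_distance(agent_states, demo_states)
    exploration = curiosity_reward(pred_next, true_next)
    return imitation + beta * exploration
```

Early in training the imitation term dominates and pulls the policy toward the demonstrator; once the agent matches the demonstrator's state distribution, the curiosity bonus keeps driving exploration toward beyond-demonstrator behavior.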