Beyond-demonstrator (BD) extrapolation via inverse reinforcement learning (IRL) aims to learn from, and ultimately outperform, the demonstrator. In sharp contrast to conventional reinforcement learning (RL) algorithms, BD-IRL can overcome the dilemma of reward function design and improve the exploration mechanism of RL, which opens new avenues for building superior expert systems. Most existing BD-IRL algorithms operate in two stages: they first infer a reward function and then learn a policy via RL. However, such two-stage BD-IRL algorithms suffer from high computational complexity, weak robustness, and large performance variance. In particular, a poor reward function derived in the first stage inevitably incurs severe performance loss in the second stage. In this work, we propose a hybrid adversarial inverse reinforcement learning (HAIRL) algorithm that is one-stage, model-free, curiosity-driven, and built in a generative-adversarial (GA) fashion. Thanks to the one-stage design, HAIRL integrates reward function learning and policy optimization into a single procedure, which yields low computational complexity, high robustness, and strong adaptability. More specifically, HAIRL simultaneously imitates the demonstrator and explores BD performance by utilizing hybrid rewards. In particular, the Wasserstein-1 distance (WD) is introduced into HAIRL to stabilize the imitation procedure, while a novel end-to-end curiosity module (ECM) is developed to improve exploration. Finally, extensive simulation results confirm that HAIRL achieves higher performance compared with other similar BD-IRL algorithms. Our code is available at our GitHub repository\footnote{\url{https://github.com/yuanmingqi/HAIRL}}.
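For reference, the WD-based imitation signal rests on the Kantorovich--Rubinstein dual form of the Wasserstein-1 distance over 1-Lipschitz critics $f$, evaluated between the state-action occupancies of the learned policy $\pi$ and the demonstrator $\pi_E$. The hybrid-reward combination shown alongside it is a hedged sketch of how the imitation and curiosity terms could be mixed; the weighting coefficient $\lambda$ is illustrative and not specified in this abstract:
\[
W_1(\pi, \pi_E) = \sup_{\|f\|_{L}\le 1} \; \mathbb{E}_{(s,a)\sim\pi}\!\left[f(s,a)\right] - \mathbb{E}_{(s,a)\sim\pi_E}\!\left[f(s,a)\right], \qquad r_t = \lambda\, r_t^{\mathrm{imit}} + (1-\lambda)\, r_t^{\mathrm{cur}}.
\]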