We present AIRS: Automatic Intrinsic Reward Shaping, which intelligently and adaptively provides high-quality intrinsic rewards to enhance exploration in reinforcement learning (RL). More specifically, AIRS selects a shaping function from a predefined set based on the estimated task return in real time, providing reliable exploration incentives and alleviating the biased-objective problem. Moreover, we develop an intrinsic reward toolkit that provides efficient and reliable implementations of diverse intrinsic reward approaches. We test AIRS on various tasks from the Procgen games and the DeepMind Control Suite. Extensive simulations demonstrate that AIRS outperforms the benchmarking schemes and achieves superior performance with a simple architecture.
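As a rough illustration of the selection mechanism described above, the sketch below frames the choice of shaping function as a bandit-style decision over a candidate set, scored by a running estimate of the task return. This is a minimal sketch under our own assumptions, not the paper's implementation: the class name `RewardShapingSelector`, the UCB scoring rule, and the dummy shaping functions are all illustrative.

```python
# Minimal sketch (not the authors' implementation): treat the choice of
# intrinsic-reward shaping function as a multi-armed bandit and pick the
# candidate whose estimated task return (plus a UCB bonus) is highest.
# All names below are illustrative assumptions.
import math
import random


class RewardShapingSelector:
    """Selects one intrinsic-reward function per rollout from a candidate set."""

    def __init__(self, shaping_fns, exploration_coef=1.0):
        self.shaping_fns = shaping_fns          # e.g. {"none": ..., "count": ..., "rnd": ...}
        self.exploration_coef = exploration_coef
        self.counts = {name: 0 for name in shaping_fns}
        self.mean_returns = {name: 0.0 for name in shaping_fns}
        self.total = 0

    def select(self):
        """Return the name of the shaping function to use for the next rollout."""
        self.total += 1
        # Try every candidate once before applying the UCB rule.
        for name, n in self.counts.items():
            if n == 0:
                return name
        ucb = {
            name: self.mean_returns[name]
            + self.exploration_coef * math.sqrt(math.log(self.total) / self.counts[name])
            for name in self.shaping_fns
        }
        return max(ucb, key=ucb.get)

    def update(self, name, task_return):
        """Update the running estimate of task return achieved under this candidate."""
        self.counts[name] += 1
        n = self.counts[name]
        self.mean_returns[name] += (task_return - self.mean_returns[name]) / n


if __name__ == "__main__":
    # Dummy shaping functions: each maps an observation to an intrinsic bonus.
    fns = {
        "none": lambda obs: 0.0,
        "count": lambda obs: 0.1,
        "rnd": lambda obs: 0.05,
    }
    selector = RewardShapingSelector(fns)
    for _ in range(20):
        arm = selector.select()
        # In a real agent, the task return would come from evaluation rollouts.
        simulated_return = random.gauss({"none": 1.0, "count": 1.5, "rnd": 1.2}[arm], 0.1)
        selector.update(arm, simulated_return)
    print({k: round(v, 2) for k, v in selector.mean_returns.items()})
```

In this framing, a shaping function that starts to bias the agent away from the task objective earns a lower estimated return and is selected less often, which is the intuition behind mitigating the biased-objective problem.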