In recent years, deep reinforcement learning (DRL) has made successful incursions into complex decision-making applications such as robotics, autonomous driving, and video games. Off-policy algorithms tend to be more sample-efficient than their on-policy counterparts, and can additionally benefit from any off-policy data stored in the replay buffer. Expert demonstrations are a popular source for such data: the agent is exposed to successful states and actions early on, which can accelerate the learning process and improve performance. In the past, multiple ideas have been proposed to make good use of the demonstrations in the buffer, such as pretraining on demonstrations only or minimizing additional cost functions. We carry out a study to evaluate several of these ideas in isolation, in order to identify which of them have the most significant impact. We also present a new method for sparse-reward tasks, based on a reward bonus given to demonstrations and successful episodes. First, we give a reward bonus to the transitions coming from demonstrations, to encourage the agent to match the demonstrated behaviour. Then, upon collecting a successful episode, we relabel its transitions with the same bonus before adding them to the replay buffer, encouraging the agent to also match its previous successes. The base algorithm for our experiments is the popular Soft Actor-Critic (SAC), a state-of-the-art off-policy algorithm for continuous action spaces. Our experiments focus on manipulation robotics, specifically on a 3D reaching task for a robotic arm in simulation. We show that our method SACR2, based on reward relabeling, improves performance on this task, even in the absence of demonstrations.
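The reward-relabeling idea can be illustrated with a minimal sketch of the replay buffer logic: demonstration transitions receive a reward bonus when they are stored, and agent episodes that end in success are relabeled with the same bonus before insertion. This is only an illustrative sketch under assumed conventions (a simple tuple transition format, a hypothetical `bonus` value, uniform sampling), not the reference implementation of SACR2.

```python
# Minimal sketch of reward relabeling for demonstrations and successful
# episodes; transition layout and bonus value are assumptions.
from collections import deque
import random


class RelabelingReplayBuffer:
    """Replay buffer that adds a reward bonus to demonstration transitions
    and to transitions from episodes that ended successfully."""

    def __init__(self, capacity=100_000, bonus=1.0):
        self.buffer = deque(maxlen=capacity)
        self.bonus = bonus  # assumed magnitude of the reward bonus

    def add_demonstrations(self, transitions):
        # Demonstration transitions receive the bonus immediately.
        for (s, a, r, s_next, done) in transitions:
            self.buffer.append((s, a, r + self.bonus, s_next, done))

    def add_episode(self, transitions, success):
        # Successful agent episodes are relabeled with the same bonus
        # before being stored; unsuccessful ones are stored unchanged.
        extra = self.bonus if success else 0.0
        for (s, a, r, s_next, done) in transitions:
            self.buffer.append((s, a, r + extra, s_next, done))

    def sample(self, batch_size):
        # Uniform sampling over the mixed demonstration/agent data.
        return random.sample(self.buffer, batch_size)
```

In this sketch the bonus is applied once, at storage time, so the off-policy learner (e.g. SAC) simply trains on the relabeled rewards without any further modification to its update rule.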