This paper proposes Hindsight Goal Ranking (HGR), a method for prioritizing replay experience that addresses a limitation of Hindsight Experience Replay (HER), which generates hindsight goals by uniform sampling. HGR samples states visited in an episode with probability proportional to their temporal difference (TD) error, which serves as a proxy for how much the RL agent can learn from an experience. Sampling for large TD error is performed in two steps: first, an episode is sampled from the replay buffer according to the average TD error of its experiences; then, for the sampled episode, a hindsight goal is drawn from the future visited states, with higher probability assigned to states yielding larger TD error. Combined with Deep Deterministic Policy Gradient (DDPG), an off-policy model-free actor-critic algorithm, the proposed method learns significantly faster than the same algorithm without prioritization on four challenging simulated robotic manipulation tasks. The empirical results show that HGR uses samples more efficiently than previous methods across all tasks.
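To make the two-step sampling concrete, the following is a minimal sketch of episode-level and goal-level prioritized sampling, assuming per-transition TD-error magnitudes are stored with each episode. The class and parameter names (HindsightGoalRankingBuffer, alpha_episode, alpha_goal, eps) are illustrative assumptions, not taken from the paper's implementation.

```python
import numpy as np

class HindsightGoalRankingBuffer:
    """Sketch of two-level prioritized sampling in the spirit of HGR."""

    def __init__(self, alpha_episode=0.6, alpha_goal=0.6, eps=1e-6):
        self.episodes = []              # each entry: per-step TD-error magnitudes
        self.alpha_episode = alpha_episode
        self.alpha_goal = alpha_goal
        self.eps = eps                  # keeps every priority strictly positive

    def add_episode(self, td_errors):
        # td_errors: |TD error| for each transition of one completed episode
        self.episodes.append(np.asarray(td_errors, dtype=float))

    def sample(self, rng=None):
        rng = rng or np.random.default_rng()

        # Step 1: sample an episode with probability proportional to the
        # average TD error of its experiences.
        ep_priority = np.array([e.mean() + self.eps for e in self.episodes])
        ep_prob = ep_priority ** self.alpha_episode
        ep_prob /= ep_prob.sum()
        ep_idx = rng.choice(len(self.episodes), p=ep_prob)

        # Step 2: within the sampled episode, pick a transition, then choose a
        # hindsight goal among the future visited states, giving higher
        # probability to states with larger TD error.
        td = self.episodes[ep_idx]
        t = rng.integers(0, len(td))                # transition index
        future_priority = td[t:] + self.eps         # candidate hindsight goals
        goal_prob = future_priority ** self.alpha_goal
        goal_prob /= goal_prob.sum()
        goal_idx = t + rng.choice(len(future_priority), p=goal_prob)
        return ep_idx, t, goal_idx
```

As a usage sketch, after each episode one would store its per-transition TD errors via `add_episode`, then call `sample` to obtain an episode index, a transition index, and the index of the future state to relabel as the hindsight goal; in a full implementation the priorities would be refreshed with the new TD errors after each gradient update.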