Action and observation delays commonly occur in Reinforcement Learning applications such as remote control scenarios. We study the anatomy of randomly delayed environments and show that partially resampling trajectory fragments in hindsight allows for off-policy multi-step value estimation. We apply this principle to derive Delay-Correcting Actor-Critic (DCAC), an algorithm based on Soft Actor-Critic with significantly better performance in environments with delays. This is shown theoretically and demonstrated practically on a delay-augmented version of the MuJoCo continuous control benchmark.
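To make the setting concrete, below is a minimal, illustrative sketch of how an environment can be augmented with random action and observation delays, in the spirit of the delay-augmented benchmark mentioned above. It is not the paper's implementation: the `env` interface (`reset`/`step` returning observation, reward, done) and the delay bounds are assumptions made for illustration only.

```python
# Hypothetical sketch of a random-delay environment wrapper (not the paper's code).
import random
from collections import deque


class RandomDelayWrapper:
    """Buffers actions and observations so the agent sees a randomly delayed view."""

    def __init__(self, env, max_obs_delay=2, max_act_delay=2):
        self.env = env  # assumed interface: reset() -> obs, step(a) -> (obs, reward, done)
        self.max_obs_delay = max_obs_delay
        self.max_act_delay = max_act_delay

    def reset(self):
        obs = self.env.reset()
        # Pre-fill the observation buffer so delayed indexing is valid from the start.
        self.obs_buffer = deque([obs] * (self.max_obs_delay + 1),
                                maxlen=self.max_obs_delay + 1)
        self.act_buffer = deque(maxlen=self.max_act_delay + 1)
        return obs

    def step(self, action):
        # The action sent now only reaches the environment after a random delay.
        self.act_buffer.append(action)
        act_delay = random.randint(0, min(self.max_act_delay, len(self.act_buffer) - 1))
        applied_action = self.act_buffer[-1 - act_delay]

        obs, reward, done = self.env.step(applied_action)

        # The observation returned to the agent is a randomly outdated one.
        self.obs_buffer.append(obs)
        obs_delay = random.randint(0, self.max_obs_delay)
        delayed_obs = self.obs_buffer[-1 - obs_delay]
        return delayed_obs, reward, done
```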