Many problems in RL, such as meta RL, robust RL, and generalization in RL, can be cast as POMDPs. In theory, simply augmenting model-free RL with memory, such as recurrent neural networks, provides a general approach to solving all types of POMDPs. However, prior work has found that such recurrent model-free RL methods tend to perform worse than more specialized algorithms that are designed for specific types of POMDPs. This paper revisits this claim. We find that careful architecture and hyperparameter decisions yield a recurrent model-free implementation that performs on par with (and occasionally substantially better than) more sophisticated recent techniques in their respective domains. We also release a simple and efficient implementation of recurrent model-free RL for future work to use as a baseline for POMDPs. Code is available at https://github.com/twni2016/pomdp-baselines
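To make the idea of "augmenting model-free RL with memory" concrete, the following is a minimal sketch of a recurrent policy that summarizes the observation-action-reward history with an LSTM. It is an illustration only, not the architecture or hyperparameters used in the paper; class and parameter names (e.g. `RecurrentActor`, `embed_dim`) are hypothetical.

```python
import torch
import torch.nn as nn

class RecurrentActor(nn.Module):
    """Illustrative recurrent policy: embeds (obs, prev_action, reward),
    summarizes the history with an LSTM, and outputs per-step action logits."""

    def __init__(self, obs_dim, act_dim, embed_dim=64, hidden_dim=128):
        super().__init__()
        # Embed the per-step inputs before the recurrent core.
        self.embed = nn.Linear(obs_dim + act_dim + 1, embed_dim)
        self.rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, act_dim)

    def forward(self, obs, prev_act, reward, hidden=None):
        # obs: (B, T, obs_dim); prev_act: (B, T, act_dim); reward: (B, T, 1)
        x = torch.relu(self.embed(torch.cat([obs, prev_act, reward], dim=-1)))
        out, hidden = self.rnn(x, hidden)  # hidden state carries memory across steps
        return self.head(out), hidden      # logits per timestep, updated hidden state
```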