Abstraction has been widely studied as a way to improve the efficiency and generalization of reinforcement learning algorithms. In this paper, we study abstraction in the continuous-control setting. We extend the definition of MDP homomorphisms to encompass continuous actions in continuous state spaces. We derive a policy gradient theorem on the abstract MDP, which allows us to leverage approximate symmetries of the environment for policy optimization. Based on this theorem, we propose an actor-critic algorithm that is able to learn the policy and the MDP homomorphism map simultaneously, using the lax bisimulation metric. We demonstrate the effectiveness of our method on benchmark tasks in the DeepMind Control Suite. Our method's ability to utilize MDP homomorphisms for representation learning leads to improved performance when learning from pixel observations.
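To make the central notion concrete: an MDP homomorphism pairs a state map with a state-dependent action map so that a policy learned on the smaller abstract MDP can be lifted back to the original one. The toy Python sketch below is purely illustrative and not the paper's algorithm; the maps `f`, `g`, and `lift_policy` are hypothetical names, and the reflection symmetry is assumed only for the example.

```python
import numpy as np

# Illustrative sketch of policy lifting through an MDP homomorphism.
# Assumption: the environment has a reflection symmetry about the origin,
# so states s and -s behave identically up to mirroring the action.

def f(s):
    # State map: fold symmetric states onto one abstract state.
    return np.abs(s)

def g(s, a):
    # State-dependent action map: mirror the action when the state is mirrored,
    # so abstract transitions match the original ones (away from the origin).
    return np.sign(s) * a

def lift_policy(abstract_policy, s):
    # Query the abstract policy at f(s), then map the abstract action back to a
    # concrete action consistent with g(s, .) (here g is its own inverse).
    abstract_action = abstract_policy(f(s))
    return np.sign(s) * abstract_action

# Toy abstract policy defined only on the folded (abstract) state space.
abstract_policy = lambda z: -0.5 * z

s = np.array([-2.0, 3.0])
print(lift_policy(abstract_policy, s))  # concrete action for the original state
```

In the paper's setting, both maps are continuous and learned jointly with the actor-critic, rather than fixed by hand as in this sketch.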