Offline reinforcement learning leverages large datasets to train policies without interactions with the environment. The learned policies may then be deployed in real-world settings where interactions are costly or dangerous. Current algorithms overfit to the training dataset and, as a consequence, perform poorly when deployed to out-of-distribution generalizations of the environment. We aim to address these limitations by learning a Koopman latent representation which allows us to infer symmetries of the system's underlying dynamics. The latter are then utilized to extend the otherwise static offline dataset during training; this constitutes a novel data augmentation framework which reflects the system's dynamics and is thus to be interpreted as an exploration of the environment's phase space. To obtain the symmetries, we employ Koopman theory, in which nonlinear dynamics are represented in terms of a linear operator acting on the space of measurement functions of the system; symmetries of the dynamics may thus be inferred directly. We provide novel theoretical results on the existence and nature of symmetries relevant for control systems such as reinforcement learning settings. Moreover, we empirically evaluate our method on several benchmark offline reinforcement learning tasks and datasets, including D4RL, Metaworld and Robosuite, and find that by using our framework we consistently improve the state-of-the-art for Q-learning methods.
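To make the role of the Koopman operator concrete, the following is a minimal sketch of the standard construction; the symbols $F$, $g$, $\mathcal{K}$, and $\Sigma$ are illustrative notation rather than the paper's own. For a discrete-time system with state transition $s_{t+1} = F(s_t)$, the Koopman operator acts linearly on measurement functions $g$ of the state,
\[
(\mathcal{K} g)(s_t) = g\big(F(s_t)\big) = g(s_{t+1}),
\]
so the nonlinear dynamics $F$ are encoded by the linear operator $\mathcal{K}$ on the space of such functions. A transformation $\Sigma$ of that space which commutes with the Koopman operator, $\Sigma \mathcal{K} = \mathcal{K} \Sigma$, maps observed trajectories to other dynamically consistent trajectories and can therefore, in principle, be used to generate additional transitions from a static offline dataset.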