In a partially observable Markov decision process (POMDP), an agent typically uses a representation of the past to approximate the underlying MDP. We propose to utilize a frozen Pretrained Language Transformer (PLT) for history representation and compression to improve sample efficiency. To avoid training the Transformer, we introduce FrozenHopfield, which automatically associates observations with pretrained token embeddings. To form these associations, a modern Hopfield network stores the token embeddings, which are retrieved by queries obtained through a random but fixed projection of observations. Our new method, HELM, enables actor-critic network architectures that contain a pretrained language Transformer for history representation as a memory module. Since a representation of the past need not be learned, HELM is much more sample efficient than competitors. On Minigrid and Procgen environments, HELM achieves new state-of-the-art results. Our code is available at https://github.com/ml-jku/helm.
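As a rough illustration of the FrozenHopfield association step described above, the sketch below realizes one-step Hopfield retrieval as softmax attention over frozen pretrained token embeddings, with queries formed by a random but fixed Gaussian projection of observations. The class and variable names, the projection scaling, and the inverse temperature `beta` are illustrative assumptions, not the paper's exact implementation.

```python
import torch

class FrozenHopfield(torch.nn.Module):
    """Minimal sketch: associate observations with frozen token embeddings
    via a random fixed projection and one-step Hopfield (softmax) retrieval."""

    def __init__(self, token_embeddings: torch.Tensor, obs_dim: int, beta: float = 1.0):
        super().__init__()
        d_model = token_embeddings.shape[1]
        # Stored patterns: pretrained token embeddings, kept frozen.
        self.register_buffer("embeddings", token_embeddings)
        # Random but fixed projection from observation space to embedding space.
        self.register_buffer("proj", torch.randn(d_model, obs_dim) / obs_dim ** 0.5)
        self.beta = beta

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        # Query: random projection of the flattened observation.
        query = obs @ self.proj.T                                      # (batch, d_model)
        # One-step Hopfield retrieval = softmax attention over stored embeddings.
        attn = torch.softmax(self.beta * query @ self.embeddings.T, dim=-1)
        # Convex combination of pretrained token embeddings.
        return attn @ self.embeddings                                  # (batch, d_model)


# Usage: the retrieved vectors can be fed as input embeddings to a frozen
# language Transformer acting as the memory module.
vocab_embeddings = torch.randn(1024, 768)        # stand-in for pretrained token embeddings
fh = FrozenHopfield(vocab_embeddings, obs_dim=3 * 64 * 64)
obs = torch.randn(8, 3 * 64 * 64)                # batch of flattened observations
retrieved = fh(obs)                              # (8, 768)
```

Because both the projection and the stored embeddings stay fixed, this step introduces no trainable parameters, which is why the history representation itself need not be learned.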