Recent work has shown that augmenting environments with language descriptions improves policy learning. However, for environments with complex language abstractions, learning how to ground language to observations is difficult due to sparse, delayed rewards. We propose Language Dynamics Distillation (LDD), which pretrains a model to predict environment dynamics given demonstrations with language descriptions, and then fine-tunes these language-aware pretrained representations via reinforcement learning (RL). In this way, the model is trained to both maximize expected reward and retain knowledge about how language relates to environment dynamics. On SILG, a benchmark of five tasks with language descriptions that evaluate distinct generalization challenges on unseen environments (NetHack, ALFWorld, RTFM, Messenger, and Touchdown), LDD outperforms tabula-rasa RL, VAE pretraining, and methods that learn from unlabeled demonstrations (inverse RL and reward shaping with pretrained experts). In our analyses, we show that language descriptions in demonstrations improve sample efficiency and generalization across environments, and that dynamics modelling with expert demonstrations is more effective than with non-experts.
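To make the two-stage recipe concrete, below is a minimal sketch of the pretrain-then-fine-tune idea described above, written in PyTorch. It is illustrative only and does not reproduce the paper's actual architecture, RL algorithm, or loss weights; the names `LanguageAwarePolicy`, `pretrain_dynamics`, `rl_finetune_loss`, and `distill_coef` are hypothetical. Stage 1 fits a dynamics head on demonstrations paired with language; stage 2 optimizes a policy-gradient objective while a distillation term keeps the representation consistent with the frozen pretrained dynamics model.

```python
import torch
import torch.nn as nn

class LanguageAwarePolicy(nn.Module):
    """Shared encoder over observation + language features, with a dynamics head
    (used in pretraining and distillation) and a policy head (used in RL)."""
    def __init__(self, obs_dim, lang_dim, hidden_dim, n_actions):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim + lang_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        self.dynamics_head = nn.Linear(hidden_dim, obs_dim)  # predicts the next observation
        self.policy_head = nn.Linear(hidden_dim, n_actions)  # action logits for RL

    def encode(self, obs, lang):
        return self.encoder(torch.cat([obs, lang], dim=-1))

def pretrain_dynamics(model, demos, epochs=10, lr=1e-3):
    """Stage 1: fit the dynamics head on (obs, lang, next_obs) tuples from demonstrations."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for obs, lang, next_obs in demos:
            pred = model.dynamics_head(model.encode(obs, lang))
            loss = nn.functional.mse_loss(pred, next_obs)
            opt.zero_grad(); loss.backward(); opt.step()

def rl_finetune_loss(model, frozen_teacher, obs, lang, actions, advantages, distill_coef=1.0):
    """Stage 2: policy-gradient loss plus a distillation term that keeps the fine-tuned
    representation's dynamics predictions close to the frozen pretrained model's."""
    feats = model.encode(obs, lang)
    logits = model.policy_head(feats)
    logp = torch.log_softmax(logits, dim=-1).gather(-1, actions.unsqueeze(-1)).squeeze(-1)
    pg_loss = -(advantages * logp).mean()
    with torch.no_grad():
        teacher_pred = frozen_teacher.dynamics_head(frozen_teacher.encode(obs, lang))
    distill_loss = nn.functional.mse_loss(model.dynamics_head(feats), teacher_pred)
    return pg_loss + distill_coef * distill_loss
```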