In this paper, we propose a decoder-free extension of Dreamer, a leading model-based reinforcement learning (MBRL) method that learns from pixels. Dreamer is a sample- and cost-efficient solution to robot learning: it trains a latent state-space model based on a variational autoencoder and optimizes the policy by imagining latent trajectories. However, this autoencoding-based approach often causes object vanishing, in which the autoencoder fails to perceive key objects needed to solve control tasks, significantly limiting Dreamer's potential. This work aims to relieve this bottleneck and enhance Dreamer's performance by removing the decoder. To this end, we first derive a likelihood-free, InfoMax objective of contrastive learning from Dreamer's evidence lower bound. Second, we incorporate two components, (i) independent linear dynamics and (ii) random crop data augmentation, into the learning scheme to improve training performance. Compared with Dreamer and other recent model-free reinforcement learning methods, our newly devised Dreamer with InfoMax and without the generative decoder (Dreaming) achieves the best scores on five difficult simulated robotics tasks on which Dreamer suffers from object vanishing.
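The likelihood-free objective mentioned above can be illustrated with an InfoNCE-style contrastive loss, a common instantiation of the InfoMax principle. The sketch below is a minimal, hypothetical illustration (not the paper's exact implementation): positive pairs are the dynamics model's predicted latent and the encoder's latent for the same time step, and other batch elements serve as negatives, so no pixel-reconstruction decoder is required.

```python
import numpy as np

def infonce_loss(predicted, encoded, temperature=0.1):
    """Contrastive (InfoNCE-style) loss over a batch of latent pairs.

    predicted: (B, D) latents predicted by the dynamics model
    encoded:   (B, D) latents encoded from the (augmented) observations
    Row i of `predicted` and row i of `encoded` form a positive pair;
    all other rows act as negatives.
    """
    # Cosine similarities between every predicted/encoded pair.
    p = predicted / np.linalg.norm(predicted, axis=1, keepdims=True)
    e = encoded / np.linalg.norm(encoded, axis=1, keepdims=True)
    logits = p @ e.T / temperature                      # (B, B)
    # Cross-entropy with the matching index as the positive class.
    logits -= logits.max(axis=1, keepdims=True)         # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```

Minimizing this loss pushes each predicted latent toward its matching encoded observation and away from the rest of the batch, tightening a lower bound on their mutual information.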