In deep reinforcement learning (RL), data augmentation is widely considered a tool for inducing useful priors about semantic consistency and for improving sample efficiency and generalization performance. However, even when the prior is useful for generalization, distilling it into the RL agent often interferes with RL training and degrades sample efficiency. Meanwhile, the agent tends to forget the prior due to the non-stationary nature of RL. These observations suggest two extreme schedules of distillation: (i) over the entire training; or (ii) only at the end. Hence, we devise a stand-alone network distillation method that can inject the consistency prior at any time (even after RL), and a simple yet efficient framework that automatically schedules the distillation. Specifically, the proposed framework first focuses on mastering the training environments, regardless of generalization, by adaptively deciding which ({\it or no}) augmentation to use for training. After this, we add the distillation to extract the remaining benefits for generalization from all the augmentations, which requires no additional new samples. In our experiments, we demonstrate the utility of the proposed framework, in particular of postponing the augmentation to the end of RL training.
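To make the post-RL distillation step concrete, below is a minimal sketch, not the authors' implementation: \texttt{policy\_net}, \texttt{replay\_obs}, the \texttt{random\_shift} augmentation, and the KL objective are all illustrative assumptions. A frozen copy of the trained policy serves as the teacher on clean observations, and the policy itself is updated to match the teacher on augmented views of the same stored observations, so no new environment samples are required.

\begin{verbatim}
# Minimal sketch of post-RL consistency distillation (illustrative only).
# A frozen copy of the trained policy is the teacher on clean observations;
# the policy is updated to match it on augmented views of stored observations,
# so no additional environment samples are collected.
import copy
import torch
import torch.nn.functional as F

def random_shift(obs, pad=4):
    # A common pixel-based RL augmentation: replicate-pad, then random crop.
    n, c, h, w = obs.shape
    padded = F.pad(obs, (pad, pad, pad, pad), mode="replicate")
    tops = torch.randint(0, 2 * pad + 1, (n,)).tolist()
    lefts = torch.randint(0, 2 * pad + 1, (n,)).tolist()
    return torch.stack([padded[i, :, t:t + h, l:l + w]
                        for i, (t, l) in enumerate(zip(tops, lefts))])

def distill_consistency(policy_net, replay_obs, augmentations,
                        steps=1000, batch_size=256, lr=1e-4):
    # Assumes policy_net maps observations to action logits.
    teacher = copy.deepcopy(policy_net).eval()   # frozen trained policy
    for p in teacher.parameters():
        p.requires_grad_(False)
    opt = torch.optim.Adam(policy_net.parameters(), lr=lr)
    for _ in range(steps):
        idx = torch.randint(0, replay_obs.shape[0], (batch_size,))
        obs = replay_obs[idx]                    # reuse stored observations
        with torch.no_grad():
            target = F.softmax(teacher(obs), dim=-1)   # clean action dist.
        loss = 0.0
        for aug in augmentations:   # extract the prior from all augmentations
            log_pred = F.log_softmax(policy_net(aug(obs)), dim=-1)
            loss = loss + F.kl_div(log_pred, target, reduction="batchmean")
        opt.zero_grad()
        loss.backward()
        opt.step()
    return policy_net
\end{verbatim}

Because the sketch only matches the policy to its own clean-observation predictions on stored data, the consistency prior can be injected at any point of the schedule, including after RL training has finished.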