Meta-reinforcement learning (RL) methods can meta-train policies that adapt to new tasks with orders of magnitude less data than standard RL, but meta-training itself is costly and time-consuming. If we can meta-train on offline data, then we can reuse the same static dataset, labeled once with rewards for different tasks, to meta-train policies that adapt to a variety of new tasks at meta-test time. Although this capability would make meta-RL a practical tool for real-world use, offline meta-RL presents additional challenges beyond online meta-RL or standard offline RL settings. Meta-RL learns an exploration strategy that collects data for adaptation, and also meta-trains a policy that quickly adapts to data from a new task. Since this policy was meta-trained on a fixed, offline dataset, it might behave unpredictably when adapting to data collected by the learned exploration strategy, which differs systematically from the offline data and thus induces distributional shift. We propose a hybrid offline meta-RL algorithm, which uses offline data with rewards to meta-train an adaptive policy, and then collects additional unsupervised online data, without any reward labels, to bridge this distributional shift. Because online collection does not require reward labels, this data can be much cheaper to collect. We compare our method to prior work on offline meta-RL on simulated robot locomotion and manipulation tasks and find that using additional unsupervised online data collection leads to a dramatic improvement in the adaptive capabilities of the meta-trained policies, matching the performance of fully online meta-RL on a range of challenging domains that require generalization to new tasks.
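To make the two-phase recipe concrete, the following is a minimal, self-contained sketch in Python. Everything in it is an illustrative assumption rather than the paper's implementation: the toy 1-D goal-reaching environment, the linear policy and context encoder, the reward model used to pseudo-label the reward-free online rollouts, and the update rules are all hypothetical stand-ins. The sketch only shows the overall structure: meta-train on offline, reward-labeled data first, then continue meta-training on unsupervised online rollouts collected by the learned policy.

```python
# Hypothetical sketch of a hybrid offline-then-online meta-RL loop.
# Phase 1: meta-train an adaptive policy on a fixed, reward-labeled offline dataset.
# Phase 2: collect reward-free online rollouts with the learned policy and keep
#          meta-training on them (here pseudo-labeled by a reward model fit on the
#          offline data -- one possible way to use unlabeled online data).
import numpy as np

rng = np.random.default_rng(0)
GOALS = rng.uniform(-1.0, 1.0, size=8)            # one goal per meta-training task

def env_step(state, action, goal):
    """Toy 1-D point environment: reward = negative distance to the task goal."""
    next_state = np.clip(state + 0.1 * action, -2.0, 2.0)
    return next_state, -abs(next_state - goal)

def rollout(theta, goal, z, horizon=20, use_true_reward=True, reward_w=None):
    """Collect one episode; optionally drop true rewards and pseudo-label instead."""
    s, traj = 0.0, []
    for _ in range(horizon):
        a = theta[0] * s + theta[1] * z + rng.normal(0.0, 0.3)   # exploration noise
        s2, r = env_step(s, a, goal)
        if not use_true_reward:                                  # reward-free rollout
            r = reward_w[0] * s2 + reward_w[1]                   # pseudo-labeled reward
        traj.append((s, a, r, s2))
        s = s2
    return traj

def infer_task(traj):
    """Stand-in context encoder: summarize a trajectory into a task embedding z."""
    return float(np.mean([r for (_, _, r, _) in traj])) if traj else 0.0

def improve(theta, traj, z, lr=0.05):
    """Placeholder policy update: nudge parameters toward higher-return behavior."""
    ret = sum(r for (_, _, r, _) in traj)
    grad = np.array([np.mean([s * ret for (s, _, _, _) in traj]), z * ret])
    return theta + lr * np.clip(grad, -1.0, 1.0)

theta = np.zeros(2)

# Phase 1: offline meta-training on a fixed, reward-labeled dataset.
offline_data = {g: [rollout(rng.normal(0, 1, 2), g, 0.0) for _ in range(5)] for g in GOALS}
for _ in range(50):
    for g, trajs in offline_data.items():
        context_traj = trajs[rng.integers(len(trajs))]
        z = infer_task(context_traj)
        theta = improve(theta, trajs[rng.integers(len(trajs))], z)

# Fit a simple reward model on the offline data (illustrative pseudo-labeling choice).
X = np.array([[s2, 1.0] for trajs in offline_data.values() for t in trajs for (_, _, _, s2) in t])
y = np.array([r for trajs in offline_data.values() for t in trajs for (_, _, r, _) in t])
reward_w, *_ = np.linalg.lstsq(X, y, rcond=None)

# Phase 2: unsupervised online collection with the learned policy, no reward labels.
for _ in range(50):
    g = rng.choice(GOALS)
    explore_traj = rollout(theta, g, 0.0, use_true_reward=False, reward_w=reward_w)
    z = infer_task(explore_traj)                     # adapt from self-collected data
    adapt_traj = rollout(theta, g, z, use_true_reward=False, reward_w=reward_w)
    theta = improve(theta, adapt_traj, z)            # shrink the online/offline gap
```

The pseudo-labeling step here is only one plausible way to exploit reward-free online data; other self-supervision schemes would fit the same two-phase skeleton.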