In recent years, increasing attention has been directed to leveraging pre-trained vision models for motor control. While existing works mainly emphasize the importance of this pre-training phase, the arguably equally important role played by downstream policy learning during control-specific fine-tuning is often neglected. It thus remains unclear whether pre-trained vision models are consistently effective across different control policies. To bridge this gap in understanding, we conduct a comprehensive study of 14 pre-trained vision models using 3 distinct classes of policy learning methods: reinforcement learning (RL), imitation learning through behavior cloning (BC), and imitation learning with a visual reward function (VRF). Our study yields a series of intriguing results, including the discovery that the effectiveness of pre-training is highly dependent on the choice of downstream policy learning algorithm. We show that the conventionally accepted evaluation based on RL methods is highly variable and therefore unreliable, and we further advocate for more robust methods such as VRF and BC. To facilitate more universal evaluations of pre-trained models and their policy learning methods in the future, we also release a benchmark of 21 tasks across 3 different environments alongside our work.
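To make the downstream policy-learning setup concrete, the following is a minimal sketch, assuming a PyTorch-style pipeline, of behavior cloning (BC) on top of a frozen pre-trained visual encoder. All class and function names here (FrozenEncoder, BCPolicy, bc_update) are illustrative assumptions and do not reflect the paper's actual implementation or the specific models evaluated.

```python
import torch
import torch.nn as nn

class FrozenEncoder(nn.Module):
    """Wraps a hypothetical pre-trained visual backbone and freezes its weights."""
    def __init__(self, backbone: nn.Module):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():
            p.requires_grad = False  # keep the pre-trained representation fixed

    @torch.no_grad()
    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.backbone(obs)

class BCPolicy(nn.Module):
    """Small policy head trained with behavior cloning on expert demonstrations."""
    def __init__(self, encoder: FrozenEncoder, feat_dim: int, act_dim: int):
        super().__init__()
        self.encoder = encoder
        self.head = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.ReLU(),
            nn.Linear(256, act_dim),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.head(self.encoder(obs))

def bc_update(policy: BCPolicy, optimizer: torch.optim.Optimizer,
              obs_batch: torch.Tensor, expert_actions: torch.Tensor) -> float:
    """One behavior-cloning step: regress predicted actions onto expert actions."""
    pred = policy(obs_batch)
    loss = nn.functional.mse_loss(pred, expert_actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this setup only the policy head is optimized, so the comparison across pre-trained encoders isolates the quality of the frozen visual representation; the RL and VRF settings in the study swap in different downstream objectives on top of the same encoders.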