When robots learn reward functions using high-capacity models that take raw state directly as input, they need to learn both a representation of what matters in the task -- the task ``features'' -- and how to combine these features into a single objective. If they try to do both at once from input designed to teach the full reward function, it is easy to end up with a representation that picks up on spurious correlations in the data and fails to generalize to new settings. Instead, our ultimate goal is to enable robots to identify and isolate the causal features that people actually care about and use when they represent states and behavior. Our idea is that we can tune into this representation by asking users what behaviors they consider similar: behaviors will be similar if the features that matter are similar, even if the low-level behavior is different; conversely, behaviors will be different if even one of the features that matter differs. This, in turn, is what enables the robot to disambiguate between what needs to go into the representation and what is spurious, as well as which aspects of behavior can be compressed together and which cannot. The notion of learning representations based on similarity has a nice parallel in contrastive learning, a self-supervised representation learning technique that maps visually similar data points to similar embeddings, where similarity is defined by a designer through data-augmentation heuristics. By contrast, in order to learn the representations that people use, so we can learn their preferences and objectives, we use their definition of similarity. In simulation as well as in a user study, we show that learning through such similarity queries leads to representations that, while far from perfect, are indeed more generalizable than self-supervised and task-input alternatives.
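To make the parallel with contrastive learning concrete, the following is a minimal sketch of how a feature representation could be trained from user similarity queries, assuming a PyTorch setup and a triplet-style objective. The names (\texttt{FeatureEncoder}, \texttt{similarity\_query\_loss}), the network architecture, and the margin value are illustrative assumptions, not details taken from the paper.

\begin{verbatim}
# Minimal sketch (not the authors' implementation): learn a feature
# embedding from user similarity queries with a triplet-style loss.
# Architecture, names, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureEncoder(nn.Module):
    """Maps a raw state/trajectory vector to a low-dimensional feature vector."""
    def __init__(self, input_dim: int, feature_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_dim, 128), nn.ReLU(),
            nn.Linear(128, feature_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def similarity_query_loss(encoder, anchor, similar, dissimilar, margin=1.0):
    """One user query: the user marked `similar` as closer to `anchor`
    than `dissimilar` in the features they care about."""
    za, zs, zd = encoder(anchor), encoder(similar), encoder(dissimilar)
    return F.triplet_margin_loss(za, zs, zd, margin=margin)

if __name__ == "__main__":
    # Toy usage with random tensors standing in for behavior data.
    torch.manual_seed(0)
    enc = FeatureEncoder(input_dim=32, feature_dim=8)
    opt = torch.optim.Adam(enc.parameters(), lr=1e-3)
    for _ in range(100):
        a, s, d = (torch.randn(16, 32) for _ in range(3))  # a batch of queries
        loss = similarity_query_loss(enc, a, s, d)
        opt.zero_grad()
        loss.backward()
        opt.step()
\end{verbatim}

The key difference from standard contrastive learning is where the triplets come from: here the positive/negative labels are supplied by the user's answers to similarity queries rather than by designer-chosen data augmentations.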