Imitation learning seeks to circumvent the difficulty of designing proper reward functions for training agents by utilizing expert behavior. With environments modeled as Markov Decision Processes (MDPs), most existing imitation algorithms are contingent on the availability of expert demonstrations in the same MDP as the one in which the new imitation policy is to be learned. In this paper, we study how to imitate tasks when there are discrepancies between the expert and agent MDPs. These discrepancies across domains could include differing dynamics, viewpoint, or morphology; we present a novel framework to learn correspondences across such domains. Importantly, in contrast to prior works, we learn this correspondence from unpaired and unaligned trajectories that contain only states in the expert domain. To do so, we utilize a cycle-consistency constraint on both the state space and a domain-agnostic latent space. In addition, we enforce consistency on the temporal position of states via a normalized position estimator function to align the trajectories across the two domains. Once this correspondence is found, we can directly transfer demonstrations from one domain to the other and use them for imitation. Experiments across a wide variety of challenging domains demonstrate the efficacy of our approach.
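As a rough illustration of the objectives described above (a sketch only; the mapping names $F$, $G$, the data distributions $\mathcal{D}_x$, $\mathcal{D}_y$, and the position estimators $P_x$, $P_y$ are assumed notation, not taken from the paper), the cycle-consistency and temporal-alignment terms might take a form such as:

$$
\mathcal{L}_{\text{cyc}} = \mathbb{E}_{s_x \sim \mathcal{D}_x}\big[\lVert G(F(s_x)) - s_x \rVert\big] + \mathbb{E}_{s_y \sim \mathcal{D}_y}\big[\lVert F(G(s_y)) - s_y \rVert\big],
\qquad
\mathcal{L}_{\text{pos}} = \mathbb{E}_{s_x \sim \mathcal{D}_x}\big[\lvert P_y(F(s_x)) - P_x(s_x) \rvert\big],
$$

where $F$ maps expert-domain states to agent-domain states, $G$ maps in the reverse direction, and $P_x$, $P_y$ estimate the normalized temporal position of a state within its trajectory in each domain.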