Inferring the intent of an intelligent agent from demonstrations, and subsequently predicting its behavior, is a critical task in many collaborative settings. A common approach to this problem is the framework of inverse reinforcement learning (IRL), in which the observed agent, e.g., a human demonstrator, is assumed to behave according to an intrinsic cost function that reflects its intent and informs its control actions. In this work, we reformulate the IRL inference problem as learning control Lyapunov functions (CLFs) from demonstrations by exploiting the inverse optimality property, which states that every CLF is also a meaningful value function. Moreover, the derived CLF formulation directly guarantees stability of the inferred control policies. We show the flexibility of our proposed method by learning from goal-directed movement demonstrations in a continuous environment.
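To make the inverse optimality link concrete, the following is a minimal sketch of the standard definitions, not a statement of this work's specific formulation: for a control-affine system, a CLF is a positive definite function whose derivative can be made negative by some admissible input, and inverse optimality asserts that such a function is the value function of an optimal control problem with a meaningful cost, where the state penalty $q$ and input penalty $R$ are implied by $V$ (e.g., via a Sontag-type construction) rather than specified a priori.
\[
\dot{x} = f(x) + g(x)\,u, \qquad V(x) > 0 \;\;\forall x \neq 0, \quad V(0) = 0,
\]
\[
\inf_{u}\; \nabla V(x)^{\top}\bigl(f(x) + g(x)\,u\bigr) < 0 \quad \forall x \neq 0 \qquad \text{(CLF condition)},
\]
\[
V(x) = \min_{u(\cdot)} \int_{0}^{\infty} \bigl(q(x) + u^{\top} R(x)\, u\bigr)\, dt, \qquad q \geq 0,\; R(x) \succ 0 \qquad \text{(inverse optimality)}.
\]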