Standard imitation learning usually assumes that demonstrations are drawn from an optimal policy distribution. In the real world, however, human demonstrations may exhibit nearly random behavior, and collecting high-quality human datasets is costly. Robots therefore need to learn from imperfect demonstrations and still acquire behavioral policies that align with human intent. Prior work uses confidence scores to extract useful information from imperfect demonstrations, but relies on access to ground-truth rewards or active human supervision. In this paper, we propose a dynamics-based method to obtain fine-grained confidence scores for data without either requirement. We develop a generalized confidence-based imitation learning framework called Confidence-based Inverse soft-Q Learning (CIQL), which can employ different policy learning methods by changing objective functions. Experimental results show that our confidence evaluation method increases the success rate of the original algorithm by $40.3\%$, which is $13.5\%$ higher than simply filtering out noisy data.
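To make the idea of confidence-weighted inverse soft-Q learning concrete, below is a minimal, purely illustrative sketch. It assumes a discrete-action Q-network `q_net` and a per-transition confidence score `confidence`; the function name and the exact form of the objective are assumptions for illustration, not the paper's actual CIQL formulation.

```python
import torch


def confidence_weighted_iq_loss(q_net, states, actions, next_states, confidence, gamma=0.99):
    """Illustrative confidence-weighted inverse soft-Q objective (a sketch).

    Each demonstration transition contributes to the expert term in
    proportion to its confidence score, so low-confidence (noisy)
    samples are down-weighted rather than discarded outright.
    """
    q_all = q_net(states)                                    # Q(s, .), shape (batch, n_actions)
    q_sa = q_all.gather(1, actions.unsqueeze(1)).squeeze(1)  # Q(s, a) for demonstrated actions
    v_curr = torch.logsumexp(q_all, dim=1)                   # soft value V(s)
    v_next = torch.logsumexp(q_net(next_states), dim=1)      # soft value V(s')

    implicit_reward = q_sa - gamma * v_next                  # implicit reward recovered from Q
    expert_term = (confidence * implicit_reward).mean()      # confidence-weighted expert term
    value_term = (v_curr - gamma * v_next).mean()            # value-regularization term

    return -expert_term + value_term
```

A usage note: `confidence` would be produced by the confidence evaluation step (dynamics-based in the paper) and detached from the policy-learning graph, so only the Q-network receives gradients from this loss.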