In the past decade, numerous machine learning algorithms have been shown to successfully learn optimal policies to control real robotic systems. However, it is common to encounter failing behaviors as the learning loop progresses. Specifically, in robot applications where failing is undesired but not catastrophic, many algorithms struggle to leverage data obtained from failures. This is usually caused by (i) the failed experiment ending prematurely, or (ii) the acquired data being scarce or corrupted. Both complicate the design of proper reward functions to penalize failures. In this paper, we propose a framework that addresses these issues. We consider failing behaviors as those that violate a constraint and address the problem of learning with crash constraints, where no data is obtained upon constraint violation. The no-data case is addressed by a novel Gaussian process (GP) model (GPCR) for the constraint that combines discrete events (failure/success) with continuous observations (obtained only upon success). We demonstrate the effectiveness of our framework on simulated benchmarks and on a real jumping quadruped, where the constraint threshold is unknown a priori. Experimental data is collected, by means of constrained Bayesian optimization, directly on the real robot. Our results outperform manual tuning, and GPCR proves useful in estimating the constraint threshold.
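To make the learning-with-crash-constraints setting concrete, the sketch below shows a toy constrained Bayesian optimization loop in which a crashed rollout yields only a binary failure label (no reward or continuous constraint observation), while a successful rollout yields a reward. This is an illustration only, not the paper's GPCR model or its implementation: the 1-D objective in run_experiment, the crash region x > 0.7, the plain GP classifier standing in for GPCR, and the success-probability-weighted expected improvement acquisition are all hypothetical choices made for brevity.

```python
# Illustrative sketch only -- NOT the paper's GPCR model or its BO implementation.
# Hypothetical ingredients: a 1-D toy objective, a crash region x > 0.7, a plain GP
# classifier in place of GPCR, and expected improvement weighted by P(success).
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor, GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def run_experiment(x):
    """Return (reward, crashed). A crash ends the rollout early: no reward is observed."""
    crashed = x > 0.7                                   # hypothetical crash region
    reward = None if crashed else -(x - 0.4) ** 2 + rng.normal(0.0, 0.01)
    return reward, crashed

X, labels, successes = [], [], []                       # all inputs, crash labels, (x, reward) pairs
for x in [0.1, 0.3, 0.5, 0.8, 0.9]:                     # initial design covering both outcomes
    r, crashed = run_experiment(x)
    X.append([x]); labels.append(0 if crashed else 1)
    if not crashed:
        successes.append((x, r))

for _ in range(15):                                     # constrained BO loop
    gp_f = GaussianProcessRegressor(kernel=RBF(0.2), alpha=1e-4, normalize_y=True)
    gp_f.fit([[x] for x, _ in successes], [r for _, r in successes])
    gp_c = GaussianProcessClassifier(kernel=RBF(0.2))   # learns P(success | x) from binary labels only
    gp_c.fit(X, labels)

    cand = np.linspace(0.0, 1.0, 200).reshape(-1, 1)
    mu, sd = gp_f.predict(cand, return_std=True)
    sd = np.maximum(sd, 1e-9)
    best = max(r for _, r in successes)
    z = (mu - best) / sd
    ei = (mu - best) * norm.cdf(z) + sd * norm.pdf(z)   # expected improvement on the reward GP
    p_success = gp_c.predict_proba(cand)[:, 1]
    x_next = float(cand[np.argmax(ei * p_success), 0])  # EI discounted by predicted success probability

    r, crashed = run_experiment(x_next)
    X.append([x_next]); labels.append(0 if crashed else 1)
    if not crashed:
        successes.append((x_next, r))

print("best non-crashing input found:", max(successes, key=lambda s: s[1])[0])
```

The key point the sketch conveys is that failures still inform the search through the constraint model even though they contribute no continuous observation; GPCR additionally fuses the continuous constraint values observed on success into a single model, which this simplified classifier does not do.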