Machine Learning methods, such as those from the Reinforcement Learning (RL) literature, have increasingly been applied to robot control problems. However, such control methods, even when they learn the environment dynamics (e.g., as in Model-Based RL/control), often remain data-inefficient. Furthermore, the decisions made by learned policies, and the estimations made by learned dynamics models, are not, unlike those of their hand-designed counterparts, readily interpretable by a human user without the use of Explainable AI techniques. This has several disadvantages, such as increased difficulty in both debugging and integration into safety-critical systems. On the other hand, in many robotic systems, prior knowledge of the environment's kinematics and dynamics is at least partially available (e.g., from classical mechanics). Arguably, incorporating such priors into the environment model or the decision process can help address the aforementioned problems: it reduces problem complexity and exploration requirements, while also facilitating the expression of the agent's decisions in terms of physically meaningful quantities. Our aim with this paper is to illustrate and support this point of view. We model a payload manipulation problem based on a real robotic system, and show that leveraging prior knowledge about the dynamics of the environment can lead to improved explainability and increased safety and data-efficiency, yielding satisfactory generalization from less data.
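One common way to realize the idea sketched above, combining a physics prior with a learned component, is a residual dynamics model: the next state is predicted by a hand-written model from classical mechanics, plus a small learned correction fitted to data. The following is a minimal sketch of this pattern on a hypothetical 1-D payload with unmodeled friction; all names, constants, and the linear residual model are illustrative assumptions, not the paper's actual system.

```python
import numpy as np

# Hypothetical 1-D payload: state = [position, velocity], action = applied force.
# The "true" dynamics include viscous friction that the physics prior omits.
DT, MASS, FRICTION = 0.05, 1.0, 0.4  # assumed constants for illustration

def true_step(s, a):
    # Ground-truth environment (Euler integration with friction).
    pos, vel = s
    acc = (a - FRICTION * vel) / MASS
    return np.array([pos + DT * vel, vel + DT * acc])

def prior_step(s, a):
    # Physics prior from classical mechanics (F = m*a), no friction term.
    pos, vel = s
    return np.array([pos + DT * vel, vel + DT * a / MASS])

# Collect transitions and fit a linear residual on features [pos, vel, force, 1].
rng = np.random.default_rng(0)
X, Y = [], []
for _ in range(200):
    s = rng.uniform(-1.0, 1.0, size=2)
    a = rng.uniform(-2.0, 2.0)
    X.append([s[0], s[1], a, 1.0])
    Y.append(true_step(s, a) - prior_step(s, a))  # model-mismatch target
X, Y = np.array(X), np.array(Y)
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

def hybrid_step(s, a):
    # Physics prior plus learned correction.
    return prior_step(s, a) + np.array([s[0], s[1], a, 1.0]) @ W

s0, a0 = np.array([0.3, -0.5]), 1.0
err_prior = np.linalg.norm(true_step(s0, a0) - prior_step(s0, a0))
err_hybrid = np.linalg.norm(true_step(s0, a0) - hybrid_step(s0, a0))
```

Because the learned part only has to capture the mismatch (here, the friction term), far fewer samples are needed than for learning the full dynamics from scratch, and the prior keeps the overall model expressed in physically meaningful quantities.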