When an autonomous robot learns how to execute actions, it is of interest to know whether and when the execution policy can be generalised to variations of the learning scenarios. This can inform the robot about the necessity of additional learning, as using incomplete or unsuitable policies can lead to execution failures. Generalisation is particularly relevant when a robot has to deal with a large variety of objects in different contexts. In this paper, we propose and analyse a strategy for generalising parameterised execution models of manipulation actions over different objects based on an object ontology. In particular, a robot transfers a known execution model to objects of related classes according to the ontology, but only if there is no other evidence that the model may be unsuitable. This allows ontological knowledge to be used as prior information that is then refined by the robot's own experiences. We verify our algorithm on two actions, grasping and stowing everyday objects, and show that the robot can deduce when an existing policy generalises to other objects and when additional execution knowledge has to be acquired.
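To make the transfer idea concrete, the following is a minimal sketch, not the paper's actual parameterised execution models: a toy class ontology is encoded as a parent map, and a model learned for one class is reused for a related class unless failure evidence marks it as unsuitable. All names (ONTOLOGY, transfer_model, failure_evidence) are hypothetical illustrations.

```python
from typing import Dict, Optional, Set

# Toy ontology: each class maps to its parent class (None for the root).
ONTOLOGY: Dict[str, Optional[str]] = {
    "object": None,
    "container": "object",
    "cup": "container",
    "bowl": "container",
    "tool": "object",
    "screwdriver": "tool",
}

def related_classes(obj_class: str, ontology: Dict[str, Optional[str]]) -> Set[str]:
    """Ancestors of obj_class plus classes that share one of those ancestors."""
    ancestors: Set[str] = set()
    current = ontology.get(obj_class)
    while current is not None:
        ancestors.add(current)
        current = ontology.get(current)
    siblings = {c for c, p in ontology.items() if p in ancestors and c != obj_class}
    return ancestors | siblings

def transfer_model(obj_class: str,
                   known_models: Dict[str, object],
                   failure_evidence: Set[str],
                   ontology: Dict[str, Optional[str]] = ONTOLOGY) -> Optional[object]:
    """Return an execution model for obj_class, reusing a related class's model
    if one exists and is not contradicted by failure evidence; None signals that
    new execution knowledge has to be acquired."""
    if obj_class in known_models:
        return known_models[obj_class]
    for cls in related_classes(obj_class, ontology):
        if cls in known_models and cls not in failure_evidence:
            # Ontological prior: reuse the related model; later experience refines it.
            return known_models[cls]
    return None

# Example: a grasping model learned for "cup" is reused for "bowl" because both
# are containers and no failure evidence exists for the "cup" model.
models = {"cup": "grasp_model_cup"}
print(transfer_model("bowl", models, failure_evidence=set()))  # -> grasp_model_cup
```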