End-to-end reinforcement learning techniques are among the most successful methods for robotic manipulation tasks. However, the training time required to find a good policy capable of solving complex tasks is prohibitively long. Therefore, depending on the computing resources available, it might not be feasible to use such techniques. The use of domain knowledge to decompose manipulation tasks into primitive skills, to be performed in sequence, could reduce the overall complexity of the learning problem, and hence reduce the amount of training required to achieve dexterity. In this paper, we propose the use of Davenport chained rotations to decompose complex 3D rotation goals into a concatenation of rotations drawn from a smaller set of simpler rotation skills. State-of-the-art reinforcement-learning-based methods can then be trained using less overall simulated experience. We compare the performance of our approach with that of the popular Hindsight Experience Replay (HER) method, trained in an end-to-end fashion using the same amount of experience in a simulated robotic hand environment. Despite a general decrease in the performance of the primitive skills when they are executed sequentially, we find that decomposing arbitrary 3D rotations into elementary rotations is beneficial when computing resources are limited, yielding success-rate increases of approximately 10% on the most complex 3D rotations with respect to HER trained in an end-to-end fashion, and increases of between 20% and 40% on the simplest rotations.
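To make the decomposition idea concrete, the sketch below shows how an arbitrary 3D rotation goal could be split into a sequence of elementary rotations about fixed axes, each of which could serve as the goal of one primitive skill. This is only an illustration, not the authors' implementation: it uses SciPy's Euler-angle decomposition, which is a special case of Davenport chained rotations with orthogonal axes, and the helper name `decompose_rotation_goal` is hypothetical.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R


def decompose_rotation_goal(goal_quat, axes="xyz"):
    """Split a 3D rotation goal (quaternion, scalar-last order) into a
    sequence of elementary rotations about fixed axes.

    Euler angles are used here as a special case of Davenport chained
    rotations with orthogonal axes; each (axis, angle) pair would become
    the goal of one primitive rotation skill.
    """
    goal = R.from_quat(goal_quat)
    angles = goal.as_euler(axes)  # lowercase axes -> extrinsic convention
    return list(zip(axes, angles))


if __name__ == "__main__":
    # Hypothetical rotation goal: 90 deg about z composed with 45 deg about x.
    goal = (R.from_euler("z", 90, degrees=True)
            * R.from_euler("x", 45, degrees=True)).as_quat()
    for axis, angle in decompose_rotation_goal(goal):
        print(f"primitive skill: rotate about {axis} by {np.degrees(angle):.1f} deg")
```

Under this sketch, a policy trained only on single-axis rotation skills could be applied to each (axis, angle) sub-goal in turn to reach the original, more complex rotation goal.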