In learning from demonstrations, many generative models of trajectories make simplifying independence assumptions: correctness is sacrificed for tractability and a fast learning phase. The ignored dependencies, often the kinematic and dynamic constraints of the system, are only restored when synthesizing the motion, which can introduce heavy distortions. In this work, we propose to use these approximate trajectory distributions as close-to-optimal discriminators in the popular generative adversarial framework, stabilizing and accelerating the learning procedure. Our method addresses two problems: adaptability and robustness. To adapt motions to varying contexts, we propose a product of Gaussian policies defined in several parametrized task spaces. Robustness to perturbations and to varying dynamics is ensured by learning the stochastic dynamics with stochastic gradient descent and ensemble methods. Two experiments on a 7-DoF manipulator validate the approach.
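For reference, the fusion rule behind a product of Gaussian policies has a simple closed form: precisions add, and the fused mean is the precision-weighted combination of the individual means. The sketch below is a minimal illustration of that rule, assuming the per-task-space policies have already been mapped into a common space (e.g., through task-space Jacobians); the function name and the toy targets are illustrative, not taken from the paper.

```python
import numpy as np

def product_of_gaussians(mus, sigmas):
    """Fuse K Gaussian policies N(mu_k, Sigma_k), all expressed in a
    common space, into one Gaussian via precision weighting.

    mus:    list of K mean vectors, each of shape (d,)
    sigmas: list of K covariance matrices, each of shape (d, d)
    """
    lambdas = [np.linalg.inv(s) for s in sigmas]      # per-policy precisions
    sigma = np.linalg.inv(sum(lambdas))               # fused covariance
    mu = sigma @ sum(l @ m for l, m in zip(lambdas, mus))  # weighted mean
    return mu, sigma

# Example: two policies pulling toward different targets; the more
# confident (lower-covariance) policy dominates the fused command.
mu1, s1 = np.array([0.0, 0.0]), np.eye(2) * 0.01   # confident policy
mu2, s2 = np.array([1.0, 1.0]), np.eye(2) * 1.0    # uncertain policy
mu, sigma = product_of_gaussians([mu1, mu2], [s1, s2])
print(mu)  # close to mu1, as its precision is 100x larger
```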