Generalizing robot trajectories from human demonstrations to new contexts remains a key challenge in Learning from Demonstration (LfD), particularly when only single-context demonstrations are available. We present a novel Gaussian Mixture Model (GMM)-based approach that enables systematic generalization from single-context demonstrations to a wide range of unseen start and goal configurations. Our method performs component-level reparameterization of the GMM, adapting both mean vectors and covariance matrices, and then applies Gaussian Mixture Regression (GMR) to generate smooth trajectories. We evaluate the approach on a dual-arm pick-and-place task with varying box placements, comparing against several baselines. Results show that our method significantly outperforms the baselines in trajectory success rate and fidelity, maintaining accuracy even under combined translational and rotational variations of the task configuration. These results demonstrate that our method generalizes effectively while ensuring boundary convergence and preserving the intrinsic structure of the demonstrated motions.
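The pipeline named in the abstract — per-component adaptation of a time-indexed GMM followed by GMR — can be sketched in a few lines. The sketch below is a minimal illustration under stated assumptions, not the paper's implementation: it assumes time is the first dimension of each component, and that the adaptation reduces to a single affine map (A, b) applied to the spatial dimensions of every component. The function names `reparameterize` and `gmr`, the example component values, and the particular rotation/translation are all hypothetical.

```python
import numpy as np
from scipy.stats import multivariate_normal

def reparameterize(priors, means, covs, A, b):
    """Affinely transform the spatial block of every Gaussian component.

    Each mean is [mu_t, mu_x1, ..., mu_xd]: time first, position after.
    The covariance transforms as T S T^T with T = blockdiag(1, A), which
    updates the spatial block (A S_xx A^T) and the time-space coupling
    (A S_xt) consistently.
    """
    new_means, new_covs = [], []
    for mu, S in zip(means, covs):
        T = np.eye(len(mu))
        T[1:, 1:] = A                 # leave the time dimension untouched
        mu_new = T @ mu
        mu_new[1:] += b               # translate the spatial part only
        new_means.append(mu_new)
        new_covs.append(T @ S @ T.T)
    return priors, new_means, new_covs

def gmr(priors, means, covs, t):
    """Condition the joint GMM p(t, x) on time t and return E[x | t]."""
    # Responsibilities h_k(t) proportional to pi_k * N(t; mu_t_k, S_tt_k)
    h = np.array([p * multivariate_normal.pdf(t, mean=mu[0], cov=S[0, 0])
                  for p, mu, S in zip(priors, means, covs)])
    h /= h.sum()
    # Weighted sum of per-component conditional means:
    # mu_x + S_xt * S_tt^{-1} * (t - mu_t)
    x = np.zeros(len(means[0]) - 1)
    for h_k, mu, S in zip(h, means, covs):
        x += h_k * (mu[1:] + S[1:, 0] / S[0, 0] * (t - mu[0]))
    return x

# Hypothetical example: two components in (t, x, y), adapted by a
# 30-degree rotation and a translation, then regressed over time.
priors = [0.5, 0.5]
means = [np.array([0.2, 0.0, 0.0]), np.array([0.8, 1.0, 0.5])]
covs = [np.diag([0.02, 0.1, 0.1]), np.diag([0.02, 0.1, 0.1])]
theta = np.pi / 6
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
b = np.array([0.3, -0.1])
priors, means, covs = reparameterize(priors, means, covs, A, b)
trajectory = np.array([gmr(priors, means, covs, t)
                       for t in np.linspace(0.0, 1.0, 50)])
```

In the paper's setting, (A, b) would presumably be derived from the new start and goal configurations; the specific rule for obtaining them from a single demonstration is not specified in the abstract and is left open here.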