In this paper, we propose a novel framework for tactile-based dexterous manipulation learning with a blind anthropomorphic robotic hand, i.e., one without visual sensing. First, object-related states are extracted from raw tactile signals by a graph-based perception model, TacGNN. The resulting tactile features are then used in the second stage for policy learning of an in-hand manipulation task. The method is evaluated on a Baoding ball task: simultaneously rotating two spheres around each other by 180 degrees in hand. We conduct experiments on object-state prediction and in-hand manipulation using a reinforcement learning algorithm (PPO). Results show that TacGNN is effective in predicting object-related states during manipulation, reducing the prediction RMSE to 0.096 cm compared with other methods such as MLP, CNN, and GCN. As a result, the robot hand can complete an in-hand manipulation task relying solely on its own perception: tactile sensing and proprioception. In addition, our method is tested on three tasks of different difficulty levels and transferred to a real robot without further training.
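To make the two-stage pipeline concrete, the sketch below shows a minimal graph-based tactile encoder whose predicted object state, together with proprioception, feeds a policy network. This is an illustrative assumption only: the taxel graph layout, layer sizes, and the `TactileGNN`/`PolicyNet` names are invented here and do not reproduce the paper's actual TacGNN architecture or PPO training loop.

```python
# Minimal sketch (not the paper's exact TacGNN): taxels are graph nodes,
# a simple message-passing encoder predicts object-related states, and a
# policy head consumes those states plus joint angles. All sizes assumed.
import torch
import torch.nn as nn


class TactileGNN(nn.Module):
    """Toy graph encoder: mean aggregation over neighboring taxels."""

    def __init__(self, in_dim=3, hidden=64, state_dim=6):
        super().__init__()
        self.msg = nn.Linear(in_dim, hidden)
        self.upd = nn.Linear(in_dim + hidden, hidden)
        self.readout = nn.Linear(hidden, state_dim)  # e.g. two ball positions

    def forward(self, x, adj):
        # x:   (N, in_dim)  per-taxel features (e.g. contact force, location)
        # adj: (N, N)       row-normalized adjacency of the taxel layout
        m = adj @ self.msg(x)                           # aggregate neighbor messages
        h = torch.relu(self.upd(torch.cat([x, m], dim=-1)))
        return self.readout(h.mean(dim=0))              # graph-level object state


class PolicyNet(nn.Module):
    """Toy actor mapping (predicted object state, joint angles) to hand actions."""

    def __init__(self, state_dim=6, joint_dim=16, act_dim=16, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + joint_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim),
        )

    def forward(self, obj_state, joints):
        return self.net(torch.cat([obj_state, joints], dim=-1))


if __name__ == "__main__":
    n_taxels = 20
    adj = torch.rand(n_taxels, n_taxels)
    adj = adj / adj.sum(dim=1, keepdim=True)            # row-normalize adjacency
    taxels = torch.rand(n_taxels, 3)
    joints = torch.rand(16)

    obj_state = TactileGNN()(taxels, adj)               # stage 1: tactile -> object state
    action = PolicyNet()(obj_state, joints)             # stage 2: state -> hand action
    print(action.shape)                                 # torch.Size([16])
```

In the actual framework, the policy head would be trained with PPO on the manipulation reward while the tactile encoder supplies the object-state observations; the standalone forward pass above only illustrates how the two stages connect.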