We propose to learn to generate grasping motions for manipulation with a dexterous hand using implicit functions. With continuous time inputs, the model generates continuous and smooth grasping plans. We name the proposed model Continuous Grasping Function (CGF). CGF is learned via generative modeling with a Conditional Variational Autoencoder (CVAE) using 3D human demonstrations. We first convert large-scale human-object interaction trajectories into robot demonstrations via motion retargeting, and then use these demonstrations to train CGF. During inference, we sample from CGF to generate different grasping plans in a simulator and select the successful ones to transfer to the real robot. By training on diverse human data, CGF generalizes to manipulating multiple objects. Compared to previous planning algorithms, CGF is more efficient and achieves a significantly higher success rate when transferred to grasping with a real Allegro Hand. Our project page is at https://jianglongye.com/cgf .
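To make the continuous-time formulation concrete, the sketch below shows one plausible way a CGF-style CVAE decoder could be queried: conditioned on a sampled latent code and an object feature, it maps any time t in [0, 1] to a hand configuration, so densely sampling t yields a smooth trajectory. This is an illustrative assumption, not the authors' implementation; all module names, dimensions, and the choice of 22 hand degrees of freedom are hypothetical.

```python
# Minimal sketch (assumed interface, not the paper's code) of querying a
# continuous grasping function: decoder(t, z, obj_feat) -> hand configuration.
import torch
import torch.nn as nn

class CGFDecoder(nn.Module):
    def __init__(self, latent_dim=64, obj_feat_dim=128, hand_dof=22):
        super().__init__()
        # hand_dof is an assumption, e.g. 16 Allegro joint angles + 6-DoF wrist pose.
        self.net = nn.Sequential(
            nn.Linear(1 + latent_dim + obj_feat_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, hand_dof),
        )

    def forward(self, t, z, obj_feat):
        # t: (B, 1) continuous time in [0, 1]; z: (B, latent_dim) latent code
        # sampled per grasp plan; obj_feat: (B, obj_feat_dim) object encoding.
        return self.net(torch.cat([t, z, obj_feat], dim=-1))

decoder = CGFDecoder()
z = torch.randn(1, 64)                       # one sampled latent -> one grasp plan
obj_feat = torch.randn(1, 128)               # placeholder object feature
ts = torch.linspace(0, 1, 100).unsqueeze(1)  # dense time queries -> smooth trajectory
traj = decoder(ts, z.expand(100, -1), obj_feat.expand(100, -1))
print(traj.shape)  # (100, 22): hand configurations along the grasping plan
```

Sampling several latent codes under this interface would correspond to generating the different candidate plans that are then filtered in simulation before real-robot transfer.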