Grasping inhomogeneous objects remains a challenging task in real-world applications due to unknown physical properties such as mass distribution and coefficient of friction. In this study, we propose a vision-based meta-learning algorithm that learns these physical properties in an agnostic way. In particular, we employ Conditional Neural Processes (CNPs) on top of DexNet-2.0. CNPs rapidly learn a physical embedding from a few observations, where each observation consists of i) a cropped depth image, ii) the grasping height between the gripper and the estimated grasping point, and iii) the binary grasping result. Our modified conditional DexNet-2.0 (DexNet-CNP) iteratively updates its predicted grasping quality from new observations, and this update can be executed online. We evaluate our method in the PyBullet simulator using various shape-primitive objects with different physical parameters. The results show that our model outperforms the original DexNet-2.0 and generalizes to unseen objects with different shapes.
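The conditioning mechanism described above can be sketched as a minimal Conditional Neural Process: each (observation, outcome) pair is encoded into a latent vector, the vectors are mean-pooled into a single object embedding, and a decoder scores new candidate grasps given that embedding. The sketch below uses randomly initialized linear layers and toy feature dimensions as stand-ins for the trained DexNet-CNP networks; all names and sizes are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes, not from the paper: a stand-in feature vector for the
# cropped depth image plus grasp height, and the embedding width.
FEAT_DIM = 8
EMBED_DIM = 16

# Randomly initialized weights stand in for trained encoder/decoder networks.
W_enc = rng.normal(size=(FEAT_DIM + 1, EMBED_DIM))   # +1 for the binary grasp result
W_dec = rng.normal(size=(EMBED_DIM + FEAT_DIM, 1))

def encode(obs_x, obs_y):
    """Embed each (grasp features, binary outcome) pair, then mean-pool.

    Mean aggregation makes the CNP permutation-invariant over observations
    and lets the embedding be refreshed online after every grasp attempt.
    """
    pairs = np.concatenate([obs_x, obs_y[:, None]], axis=1)
    return np.tanh(pairs @ W_enc).mean(axis=0)

def predict_quality(embedding, query_x):
    """Decode a grasp-quality score in (0, 1) for each candidate grasp."""
    tiled = np.broadcast_to(embedding, (len(query_x), EMBED_DIM))
    logits = (np.concatenate([tiled, query_x], axis=1) @ W_dec).squeeze(-1)
    return 1.0 / (1.0 + np.exp(-logits))   # sigmoid

# Three simulated grasp attempts on one object: features + binary outcomes.
obs_x = rng.normal(size=(3, FEAT_DIM))
obs_y = np.array([1.0, 0.0, 1.0])

z = encode(obs_x, obs_y)                       # object-specific embedding
q = predict_quality(z, rng.normal(size=(5, FEAT_DIM)))  # score 5 candidates
print(q.shape)
```

Conditioning rather than gradient-based fine-tuning is what allows the quality prediction to be updated in a single forward pass as new grasp outcomes arrive.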