Thin plastic bags are ubiquitous in retail stores, healthcare, food handling, recycling, homes, and school lunchrooms. They are challenging both for perception (due to specularities and occlusions) and for manipulation (due to the dynamics of their 3D deformable structure). We formulate the task of manipulating common plastic shopping bags with two handles from an unstructured initial state to a state where solid objects can be inserted into the bag for transport. We propose a self-supervised learning framework where a dual-arm robot learns to recognize the handles and rim of plastic bags using UV-fluorescent markings; at execution time, the robot does not use UV markings or UV light. We propose Autonomous Bagging (AutoBag), where the robot uses the learned perception model to open plastic bags through iterative manipulation. We present novel metrics to evaluate the quality of a bag state and new motion primitives for reorienting and opening bags from visual observations. In physical experiments, a YuMi robot using AutoBag is able to open bags and achieve a success rate of 16/30 for inserting at least one item across a variety of initial bag configurations. Supplementary material is available at https://sites.google.com/view/autobag.
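The self-supervised labeling described above relies on UV-fluorescent markings that glow under UV illumination, making handle and rim pixels easy to segment for training labels. A minimal sketch of how such labels might be extracted by color thresholding; the assumption that markings appear as saturated green, the threshold values, and the function name are all illustrative, not the paper's actual pipeline:

```python
import numpy as np

def uv_fluorescent_mask(image_rgb, green_thresh=180, other_thresh=100):
    """Return a boolean mask of pixels where UV-fluorescent paint glows.

    Assumes the marked regions appear as bright, saturated green under
    UV light; the per-channel thresholds here are illustrative guesses,
    not values from the paper.
    """
    r = image_rgb[..., 0].astype(np.int32)
    g = image_rgb[..., 1].astype(np.int32)
    b = image_rgb[..., 2].astype(np.int32)
    # A pixel counts as "fluorescent" when green dominates both other channels.
    return (g > green_thresh) & (r < other_thresh) & (b < other_thresh)

# Tiny synthetic image: one glowing pixel on a dark background.
img = np.zeros((2, 2, 3), dtype=np.uint8)
img[0, 0] = (20, 220, 30)  # simulated fluorescent marking
mask = uv_fluorescent_mask(img)
```

The resulting binary mask can serve as a per-pixel segmentation label, so the robot collects training data without manual annotation; at execution time the perception model predicts these regions from ordinary RGB images with no UV light present.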