Automatic packing of objects is a critical component of efficient shipping in the Industry 4.0 era. Although robots have shown great success in pick-and-place operations with rigid products, the autonomous shaping and packing of elastic materials into compact boxes remains one of the most challenging problems in robotics. Automating packing tasks is crucial at this moment given the accelerating shift towards e-commerce, which requires manipulating multiple types of materials. In this paper, we propose a new action planning approach to automatically pack long linear elastic objects into common-size boxes with a bimanual robotic system. To this end, we developed an efficient vision-based method to compute an object's geometry and track its deformation in real time without special markers; the algorithm filters and orders the feedback point cloud captured by a depth sensor. A reference object model is introduced to plan the manipulation targets and to complete occluded parts of the object. Action primitives are used to construct high-level behaviors, which enable the execution of all packing steps. To validate the proposed approach, we conduct a detailed experimental study with multiple types and lengths of objects and packing boxes. The proposed methodology is original, and its demonstrated manipulation capabilities have not, to the best of the authors' knowledge, been previously reported in the literature.
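The filtering and ordering of the feedback point cloud mentioned above can be illustrated with a minimal sketch. This is not the paper's implementation; the function names, the median-distance outlier filter, and the greedy nearest-neighbor ordering (starting from an estimated endpoint of the linear object) are illustrative assumptions:

```python
# Hypothetical sketch of the point-cloud processing step: filter noisy depth
# points, then order the survivors into a chain along the object's length.
# All names and parameters are illustrative assumptions, not the paper's API.
import numpy as np

def filter_points(points, center_tol=0.15):
    """Crude outlier removal: keep points within center_tol (meters)
    of the per-axis median of the cloud."""
    med = np.median(points, axis=0)
    keep = np.linalg.norm(points - med, axis=1) < center_tol
    return points[keep]

def order_points(points):
    """Order points into a chain by greedy nearest-neighbor traversal,
    starting from the point farthest from the centroid (an endpoint guess
    that is reasonable for a long, roughly linear object)."""
    pts = points.copy()
    start = int(np.argmax(np.linalg.norm(pts - pts.mean(axis=0), axis=1)))
    ordered = [pts[start]]
    remaining = np.delete(pts, start, axis=0)
    while len(remaining):
        # Append the remaining point closest to the current chain end.
        d = np.linalg.norm(remaining - ordered[-1], axis=1)
        i = int(np.argmin(d))
        ordered.append(remaining[i])
        remaining = np.delete(remaining, i, axis=0)
    return np.array(ordered)
```

In practice such a pipeline would run per depth frame, so that downstream components (the reference model fitting and the action primitives) always receive an ordered curve rather than an unstructured cloud.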