Robotic manipulation of highly deformable cloth presents a promising opportunity to assist people with several daily tasks, such as washing dishes; folding laundry; or dressing, bathing, and hygiene assistance for individuals with severe motor impairments. In this work, we introduce a formulation that enables a collaborative robot to perform visual haptic reasoning with cloth -- the act of inferring the location and magnitude of applied forces during physical interaction. We present two distinct model representations, trained in physics simulation, that enable haptic reasoning using only visual and robot kinematic observations. We quantitatively evaluate these models in simulation for robot-assisted dressing, bathing, and dishwashing tasks, and demonstrate that the trained models generalize across tasks with varying interactions, human body sizes, and object shapes. We also present results with a real-world mobile manipulator, which used our simulation-trained models to estimate applied contact forces while performing physically assistive tasks with cloth. Videos can be found on our project webpage.
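To make the formulation concrete, the sketch below shows one plausible shape such a model could take: a network that maps a depth image of the cloth together with the robot's end-effector kinematics to a per-pixel map of applied force magnitude, i.e., both where and how strongly the cloth presses against the body or object. This is a minimal illustrative sketch in PyTorch; the class name, architecture, and input dimensions are assumptions for exposition, not the paper's actual model representations.

```python
import torch
import torch.nn as nn

class VisualHapticNet(nn.Module):
    """Hypothetical sketch: maps a 1-channel depth image of the cloth plus
    end-effector kinematics (pose/velocity vector) to a per-pixel map of
    applied force magnitude. Architecture is illustrative only."""

    def __init__(self, kin_dim: int = 7):
        super().__init__()
        # Encode the visual observation (depth image) into a feature map.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Embed the robot kinematic state and broadcast it over the feature map.
        self.kin_embed = nn.Linear(kin_dim, 64)
        # Decode back to input resolution: one channel of force magnitude.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, depth: torch.Tensor, kin: torch.Tensor) -> torch.Tensor:
        feat = self.encoder(depth)                 # (B, 64, H/4, W/4)
        k = self.kin_embed(kin)[:, :, None, None]  # (B, 64, 1, 1), broadcast
        return self.decoder(feat + k)              # (B, 1, H, W) force map

# Usage: predict a force map from a simulated depth frame and joint state.
net = VisualHapticNet()
force_map = net(torch.rand(1, 1, 64, 64), torch.rand(1, 7))
```

In this framing, supervision would come from the physics simulator, which can report ground-truth contact forces between the cloth and the body during training -- a signal unavailable on the real robot, which is what makes the sim-to-real transfer of the trained models notable.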