Employing skin-like tactile sensors on robots enhances both the safety and usability of collaborative robots by adding the capability to detect human contact. Unfortunately, simple binary tactile sensors alone cannot determine the context of the human contact -- whether it is a deliberate interaction or an unintended collision that requires safety manoeuvres. Many published methods classify discrete interactions using more advanced tactile sensors or by analysing joint torques. Instead, we propose to augment the intention recognition capabilities of simple binary tactile sensors by adding a robot-mounted camera for human posture analysis. Different interaction characteristics, including touch location, human pose, and gaze direction, are used to train a supervised machine learning algorithm to classify a touch as intentional or unintentional with 92% accuracy. We demonstrate on the collaborative robot Baxter that multimodal intention recognition is significantly more accurate than monomodal analysis. Furthermore, our method can continuously monitor interactions that fluidly change between intentional and unintentional by gauging the user's attention through gaze. If a user stops paying attention mid-task, the proposed intention and attention recognition algorithm can activate safety features to prevent unsafe interactions. In addition, the proposed method is agnostic to the robot and touch sensor layout and is complementary to other methods.
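The multimodal fusion described above can be sketched as a small classifier that combines binary touch, pose, and gaze features into an intention probability. This is a minimal illustrative sketch, not the paper's actual model: the feature names (`touch_on_end_effector`, `gaze_toward_robot`, `facing_robot`), the hand-set weights, and the logistic form are all assumptions standing in for the trained supervised algorithm.

```python
# Hypothetical sketch of multimodal intention classification.
# The features, weights, and logistic model below are illustrative
# assumptions, not the paper's actual trained classifier.
from dataclasses import dataclass
import math

@dataclass
class Interaction:
    touch_on_end_effector: float  # 1.0 if the touch is near the end effector
    gaze_toward_robot: float      # 1.0 if the user's gaze is on the robot
    facing_robot: float           # 1.0 if the user's torso faces the robot

# Toy weights standing in for parameters learned from labelled data.
WEIGHTS = (1.5, 2.0, 1.0)
BIAS = -2.2

def intention_score(x: Interaction) -> float:
    """Fuse the three modalities into a probability of intentional contact."""
    z = (WEIGHTS[0] * x.touch_on_end_effector
         + WEIGHTS[1] * x.gaze_toward_robot
         + WEIGHTS[2] * x.facing_robot
         + BIAS)
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid

def classify(x: Interaction, threshold: float = 0.5) -> str:
    return "intentional" if intention_score(x) >= threshold else "unintentional"

# An attentive user touching the arm while looking at the robot:
engaged = Interaction(1.0, 1.0, 1.0)
# A distracted user brushing the robot while looking away:
distracted = Interaction(1.0, 0.0, 0.0)
print(classify(engaged))     # intentional
print(classify(distracted))  # unintentional
```

The same score, tracked over time against the gaze feature, illustrates how attention could be monitored continuously: a touch that starts as intentional drops below the threshold once gaze leaves the robot, at which point safety features would engage.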