Nowadays, robots are deployed in a growing number of areas where they collaborate closely with humans. Enabled by lightweight materials and safety sensors, these cobots are becoming increasingly popular in domestic care, supporting people with physical impairments in their everyday lives. However, when cobots perform actions autonomously, it remains challenging for human collaborators to understand and predict their behavior, which is crucial for achieving trust and user acceptance. One significant aspect of predicting cobot behavior is understanding their motion intention and comprehending how they "think" about their actions. Moreover, other information sources often occupy the human visual and auditory channels, rendering them frequently unsuitable for transmitting such information. To tackle this challenge, we work on a solution that communicates cobot intention via haptic feedback. In our concept, we map planned motions of the cobot to different haptic patterns to extend the visual intention feedback.
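To illustrate the core idea of the concept, the following minimal sketch maps a cobot's planned motions to distinct vibrotactile patterns. It is not the authors' implementation: the motion categories, pattern parameters, and all identifiers are hypothetical placeholders chosen only to show how planned motions could be associated with distinguishable haptic signals.

```python
# Illustrative sketch only: hypothetical motion categories and vibrotactile
# parameters; not the implementation described in the paper.
from dataclasses import dataclass
from enum import Enum, auto


class PlannedMotion(Enum):
    """Hypothetical high-level motion intentions a cobot might announce."""
    REACH_TOWARD_USER = auto()
    RETRACT = auto()
    ROTATE_JOINT = auto()
    GRASP_OBJECT = auto()


@dataclass
class HapticPattern:
    """A simple vibrotactile pattern: carrier frequency, pulse length, pulse count."""
    frequency_hz: int
    pulse_s: float
    repetitions: int


# Assumed mapping from planned motions to haptic patterns; parameter values are
# placeholders meant only to convey that each intention gets a distinct pattern.
INTENTION_PATTERNS = {
    PlannedMotion.REACH_TOWARD_USER: HapticPattern(250, 0.30, 3),
    PlannedMotion.RETRACT:           HapticPattern(150, 0.15, 2),
    PlannedMotion.ROTATE_JOINT:      HapticPattern(200, 0.10, 4),
    PlannedMotion.GRASP_OBJECT:      HapticPattern(300, 0.50, 1),
}


def announce_intention(motion: PlannedMotion) -> HapticPattern:
    """Look up the haptic pattern that signals the cobot's next planned motion."""
    return INTENTION_PATTERNS[motion]


if __name__ == "__main__":
    pattern = announce_intention(PlannedMotion.REACH_TOWARD_USER)
    print(f"Play {pattern.repetitions}x {pattern.pulse_s}s pulses at {pattern.frequency_hz} Hz")
```

In practice, such a lookup would be driven by the cobot's motion planner and rendered on a wearable actuator, so that the haptic channel announces the upcoming action before it is executed.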