While trust in human-robot interaction is increasingly recognized as necessary for the implementation of social robots, our understanding of how to regulate trust in human-robot interaction remains limited. In the current experiment, we evaluated different approaches to trust calibration in human-robot interaction. Our within-subject experimental design utilized five strategies for trust calibration: proficiency, situation awareness, transparency, trust violation, and trust repair. We implemented these interventions in an experiment where participants (N=24) teamed up with a social robot and played a collaborative game. The level of trust was measured after each section using the Multi-Dimensional Measure of Trust (MDMT) scale. As expected, the interventions had a significant effect on i) violating and ii) repairing the level of trust throughout the interaction. Additionally, the robot demonstrating situation awareness was perceived as significantly more benevolent than the baseline.