Recent developments in explainable artificial intelligence promise to transform human-robot interaction: explanations of robot decisions could shape user perceptions, justify the robots' reliability, and increase trust. However, the effects of decision-explaining robots on human perceptions have not been studied thoroughly. To analyze these effects, we conducted a study in which two simulated robots play a competitive board game. While one robot explains its moves, the other only announces them. Providing explanations for its actions was not sufficient to change the robot's perceived competence, intelligence, likeability, or safety ratings. However, the results show that the robot that explains its moves is perceived as more lively and human-like. This study demonstrates the need for and potential of explainable human-robot interaction, and for a wider assessment of its effects as a novel research direction.