In 2018, the European Commission highlighted the need for a human-centred approach to AI. This claim becomes even more relevant for technologies specifically designed to interact directly and collaborate physically with human users in the real world, as is notably the case for social robots. The domain of Human-Robot Interaction (HRI) emerged to investigate these issues. Human-robot trust has been highlighted as one of the most challenging and intriguing factors influencing HRI. On the one hand, user studies and technical experts underline that trust is a key element in facilitating users' acceptance, thereby increasing the chances of successfully accomplishing the given task. On the other hand, this phenomenon also raises ethical and philosophical concerns, leading scholars in these domains to argue that humans should not trust robots. However, trust in HRI is not an index of fragility; it is rooted in anthropomorphism and is a natural characteristic of every human being. Thus, instead of focusing solely on how to inspire user trust in social robots, this paper argues that what should be investigated is to what extent, and for which purposes, it is appropriate to trust robots. Such an endeavour requires an interdisciplinary approach that takes into account (i) technical needs and (ii) psychological implications.