As social robots become increasingly prevalent in day-to-day environments, they will need to participate in conversations and appropriately manage the information shared with them. However, little is known about how robots might discern the sensitivity of such information, which has major implications for human-robot trust. As a first step toward addressing this issue, we designed a privacy controller, CONFIDANT, for conversational social robots, capable of using contextual metadata (e.g., sentiment, relationships, topic) from conversations to model privacy boundaries. We then conducted two crowdsourced user studies. The first study (n=174) examined whether a variety of human-human interaction scenarios were perceived as private/sensitive or non-private/non-sensitive; its findings were used to generate association rules. Our second study (n=95) evaluated the effectiveness and accuracy of the privacy controller in human-robot interaction scenarios by comparing a robot that used our privacy controller against a baseline robot with no privacy controls. Our results demonstrate that the robot with the privacy controller outperforms the baseline in privacy-awareness, trustworthiness, and social-awareness. We conclude that integrating privacy controllers into authentic human-robot conversations can enable more trustworthy robots, and that this initial privacy controller can serve as a foundation for more complex solutions.