Voice assistants offer a convenient, hands-free way of accessing computing in the home, but a key problem with speech as an interaction modality is how to scaffold accurate mental models of voice assistants, a task complicated by privacy and security concerns. We present the results of a survey of voice assistant users (n=1314) measuring trust, security, and privacy perceptions of voice assistants with varying levels of online functionality explained in different ways. We then asked participants to re-explain how these voice assistants worked, showing that while privacy explanations relieved privacy concerns, trust concerns were exacerbated by trust explanations. Participants' trust, privacy, and security perceptions also distinguished between first-party online functionality from the voice assistant vendor and third-party online functionality from other developers, and trust in vendors appeared to operate independently of device explanations. Our findings point to the use of analogies to guide users, targeting trust and privacy concerns, key improvements required from manufacturers, and implications for competition in the sector.