Driven by ongoing improvements in machine learning, chatbots have evolved from experimental interface prototypes into reliable and robust tools for process automation. Building on these advances, companies have identified various application scenarios in which the automated processing of human language can help foster task efficiency. Here, the use of chatbots is said not only to decrease costs but also to boost user satisfaction. People's intention to use and/or reuse this technology, however, often depends on less utilitarian factors. In particular, trust and the respective task satisfaction count as relevant usage predictors. In this paper, we thus present work that aims to shed some light on these two constructs. We report on an experimental study ($n=277$) investigating four different human-chatbot interaction tasks. After each task, participants were asked to complete survey items on perceived trust, perceived task complexity, and perceived task satisfaction. Results show that task complexity has a negative impact on both trust and satisfaction. Notably, higher complexity was associated particularly with conversations that relied on broad, descriptive chatbot answers, whereas conversations that spanned several short steps were perceived as less complex, even when the overall conversation was eventually longer.