We present a series of two studies conducted to understand users' affective states during voice-based human-machine interactions, with an emphasis on cases of communication errors or failures. In particular, we are interested in understanding "confusion" in relation to other affective states. The studies consist of two types of tasks: (1) communication tasks with a voice-based virtual agent, in which participants speak to the machine and interpret what it says, and (2) non-communication, problem-solving tasks, in which participants solve puzzles and riddles and are then asked to verbally explain their answers to the machine. We collected audio-visual data and self-reports of the participants' affective states. We report the results of the two studies and the analysis of the collected data. The first study was analyzed based on annotators' observations, and the second study was analyzed based on participants' self-reports.