AI shows promise for assisting UX evaluators with analyzing usability tests, but its judgments are typically presented as non-interactive visualizations. Evaluators may have questions about test recordings, but have no way to ask them. Interactive conversational assistants offer a Q&A dynamic that may improve analysis efficiency and evaluator autonomy. To understand the full range of analysis-related questions, we conducted a Wizard-of-Oz design probe study with 20 participants who interacted with simulated AI assistants via text or voice. We found that participants asked for five categories of information: user actions, user mental model, help from the AI assistant, product and task information, and user demographics. Participants who used the text assistant asked more questions, though question lengths were similar across conditions. The text assistant was perceived as significantly more efficient, but both were rated similarly in satisfaction and trust. We also provide design considerations for future conversational AI assistants for UX evaluation.