Designing suitable tasks for visualization evaluation remains challenging. Traditional evaluation techniques commonly rely on 'low-level' or 'open-ended' tasks to assess the efficacy of a proposed visualization; however, nontrivial trade-offs exist between the two. Low-level tasks allow for robust quantitative evaluations, but are not indicative of the complex usage of a visualization. Open-ended tasks, while excellent for insight-based evaluations, are typically unstructured and require time-consuming interviews. Bridging this gap, we propose inferential tasks: a complementary task category based on inferential learning in psychology. Inferential tasks produce quantitative evaluation data in which users are prompted to form and validate their own findings with a visualization. We demonstrate the use of inferential tasks through a validation experiment on two well-known visualization tools.