Despite its importance for assessing the effectiveness of communicating information visually, the fine-grained recallability of information visualisations has so far not been studied quantitatively. In this work, we propose a question-answering paradigm to study visualisation recallability and present VisRecall -- a novel dataset consisting of 200 visualisations annotated with recallability scores crowd-sourced from 305 participants via 1,000 questions of five question types. Furthermore, we present the first computational method to predict the recallability of different visualisation elements, such as the title or specific data values. We report detailed analyses of our method on VisRecall and demonstrate that it outperforms several baselines in overall recallability as well as FE-, F-, RV-, and U-question recallability. Taken together, our work makes fundamental contributions towards a new generation of methods to assist designers in optimising visualisations.