Each year, thousands of people learn new visual categorization tasks -- radiologists learn to recognize tumors, birdwatchers learn to distinguish similar species, and crowd workers learn how to annotate valuable data for applications like autonomous driving. As humans learn, their brains update the visual features they extract and attend to, which ultimately informs their final classification decisions. In this work, we propose a novel task of tracing the evolving classification behavior of human learners as they engage in challenging visual classification tasks. We propose models that jointly extract the visual features used by learners and predict the classification functions they utilize. We collect three challenging new datasets from real human learners in order to evaluate the performance of different visual knowledge tracing methods. Our results show that our recurrent models are able to predict the classification behavior of human learners on three challenging medical image and species identification tasks.
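To make the setup concrete, below is a minimal sketch, assuming a PyTorch implementation, of the kind of recurrent tracing model described in the abstract: a precomputed image feature and the learner's previous response are fed to a GRU whose hidden state tracks the learner's evolving knowledge, and a linear head predicts the learner's next classification. All module names, dimensions, and hyperparameters are illustrative assumptions, not the paper's actual architecture.

```python
# Hypothetical sketch of a recurrent visual knowledge tracing model.
import torch
import torch.nn as nn


class RecurrentTracer(nn.Module):
    def __init__(self, feat_dim: int = 512, n_classes: int = 2, hidden: int = 128):
        super().__init__()
        # GRU input: image feature concatenated with the learner's previous
        # (one-hot) response, so the hidden state can track evolving knowledge.
        self.rnn = nn.GRU(feat_dim + n_classes, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, img_feats: torch.Tensor, prev_resp: torch.Tensor) -> torch.Tensor:
        # img_feats: (batch, time, feat_dim)   precomputed visual features
        # prev_resp: (batch, time, n_classes)  learner's previous answers (one-hot)
        x = torch.cat([img_feats, prev_resp], dim=-1)
        h, _ = self.rnn(x)
        return self.head(h)  # (batch, time, n_classes) logits for the next answer


# Toy usage: one learner, a sequence of 10 trials on a binary task.
model = RecurrentTracer()
feats = torch.randn(1, 10, 512)
prev = torch.nn.functional.one_hot(torch.randint(0, 2, (1, 10)), 2).float()
logits = model(feats, prev)
print(logits.shape)  # torch.Size([1, 10, 2])
```

The design choice sketched here is that the model conditions on the learner's own response history rather than only on the images, which is what allows it to trace an individual learner's changing behavior over the course of training.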