Interpretive scholars generate knowledge from text corpora by manually sampling documents, applying codes, and refining and collating codes into categories until meaningful themes emerge. Given a large corpus, machine learning could help scale this data sampling and analysis, but prior research shows that experts are generally concerned about algorithms potentially disrupting or driving interpretive scholarship. We take a human-centered design approach to addressing concerns around machine-assisted interpretive research to build Scholastic, which incorporates a machine-in-the-loop clustering algorithm to scaffold interpretive text analysis. As a scholar applies codes to documents and refines them, the resulting coding schema serves as structured metadata which constrains hierarchical document and word clusters inferred from the corpus. Interactive visualizations of these clusters can help scholars strategically sample documents further toward insights. Scholastic demonstrates how human-centered algorithm design and visualizations employing familiar metaphors can support inductive and interpretive research methodologies through interactive topic modeling and document clustering.