It is important for researchers to understand precisely how data scientists turn raw data into insights, including their typical programming patterns, workflows, and methodologies. This paper contributes a novel system, called DataInquirer, that tracks incremental code executions in Jupyter notebooks (a type of computational notebook). The system allows us to quantitatively measure timing, workflow, and operation frequency in data science tasks without resorting to human annotation or interviews. In a series of four pilot studies, we collect 97 traces logging data scientists' activities. While this paper presents a general system and data analysis approach, our pilot studies focus on a foundational sub-question: How consistent are different data scientists when analyzing the same data? We taxonomize variation among data scientists working on the same dataset into three categories: semantic, syntactic, and methodological. Our results suggest statistically significant differences in the conclusions that different data scientists reach on the same task, and we present quantitative evidence for this phenomenon. Furthermore, our results suggest that AI-powered code tools subtly influence these outcomes, allowing student participants to generate workflows that more closely resemble those of expert data practitioners.
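To make the tracing idea concrete, the sketch below shows one way incremental cell executions could be logged from inside a Jupyter kernel using IPython's public events API. This is an illustrative assumption, not DataInquirer's actual implementation; the output file name (`trace.jsonl`), record schema, and `log_cell` callback are hypothetical.

```python
# Hypothetical sketch: logging incremental cell executions in a Jupyter
# notebook via IPython's events API. Illustrates the kind of trace a
# system like DataInquirer could record; not the authors' implementation.
import json
import time

from IPython import get_ipython

TRACE_PATH = "trace.jsonl"  # hypothetical output file, one JSON record per line


def log_cell(result):
    """Append one record per executed cell: timestamp, source code, success flag."""
    record = {
        "time": time.time(),
        "source": result.info.raw_cell,  # the raw text of the executed cell
        "success": result.success,       # False if the cell raised an error
    }
    with open(TRACE_PATH, "a") as f:
        f.write(json.dumps(record) + "\n")


# Register on the running kernel; 'post_run_cell' fires after every cell
# execution and passes an ExecutionResult to the callback. Run this inside
# a notebook, where get_ipython() returns the active InteractiveShell.
get_ipython().events.register("post_run_cell", log_cell)
```

An append-only, line-delimited log of this kind would support the paper's analyses (timing, workflow ordering, operation frequency) as simple post-hoc passes over the trace file.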