The complexity of exploratory data analysis poses significant challenges for collaboration and effective communication of analytic workflows. Automated methods can alleviate these challenges by summarizing workflows into more interpretable segments, but designing effective provenance-summarization algorithms depends on understanding the factors that guide how humans segment their analysis. To address this, we conducted an empirical study that explores how users naturally present, communicate, and summarize visual data analysis activities. Our qualitative analysis uncovers key patterns and high-level categories that inform users' decisions when segmenting analytic workflows, revealing the nuanced interplay between data-driven actions and strategic thinking. These insights provide a robust empirical foundation for algorithm development and highlight critical factors that must be considered to enhance the design of visual analytics tools. By grounding algorithmic decisions in human behavior, our findings offer valuable contributions to developing more intuitive and practical tools for automated summarization and clear presentation of analytic provenance.