Generative artificial intelligence poses new challenges around assessment and academic integrity, increasingly driving introductory programming educators to employ invigilated exams, often conducted in person with pencil and paper. But the structure of such exams often fails to accommodate authentic programming experiences that involve planning, implementing, and debugging programs through interaction with a computer. In this experience report, we describe code interviews: a more authentic assessment method for take-home programming assignments. Through action research, we experimented with varying the number and type of questions as well as whether interviews were conducted individually or with groups of students. To scale the approach, we converted most of our weekly teaching assistant (TA) sections to conduct code interviews on 5 major weekly take-home programming assignments. By triangulating data from 5 sources, we identified 4 themes. Code interviews (1) pushed students to discuss their work, motivating more nuanced but sometimes repetitive insights; (2) enabled peer learning, reducing stress in some ways but increasing it in others; (3) scaled with TA-led sections, replacing familiar practice with an unfamiliar assessment; and (4) focused on student contributions, limiting opportunities for TAs to give guidance and feedback. We conclude by discussing key decisions in the design of code interviews and their implications for student experience, academic integrity, and teaching workload.