Gender stereotypes in introductory programming courses often go unnoticed, yet they can negatively influence young learners' interest and learning, particularly among under-represented groups such as girls. Popular tutorials on block-based programming with Scratch may unintentionally reinforce biases through character choices, narrative framing, or activity types. Educators currently lack support for identifying and addressing such bias. With large language models~(LLMs) increasingly used to generate teaching materials, this problem is potentially exacerbated by LLMs trained on biased datasets. However, LLMs also offer an opportunity to address the issue. In this paper, we explore the use of LLMs for automatically identifying gender-stereotypical elements in Scratch tutorials, thus providing feedback on how to improve teaching content. We develop a framework for assessing gender bias that considers characters, content, instructions, and programming concepts. Analogous to how code analysis tools provide feedback on code in terms of code smells, we operationalise this framework in an automated tool chain that identifies *gender stereotype smells*. An evaluation of 73 popular Scratch tutorials from leading educational platforms demonstrates that stereotype smells are common in practice. While LLMs are not effective at detecting them, our gender bias evaluation framework can guide LLMs in generating tutorials with fewer stereotype smells.