Generative AI enables personalized computer science education at scale, yet questions remain about whether such personalization supports or undermines learning. This scoping review synthesizes 32 studies (2023-2025), purposively sampled from 259 records, to map personalization mechanisms and effectiveness signals in higher-education computer science contexts. We identify five application domains (intelligent tutoring, personalized materials, formative feedback, AI-augmented assessment, and code review) and analyze how design choices shape learning outcomes. Designs incorporating explanation-first guidance, solution withholding, graduated hint ladders, and artifact grounding (student code, tests, and rubrics) consistently show more positive learning processes than unconstrained chat interfaces. Successful implementations share four patterns: context-aware tutoring anchored in student artifacts, multi-level hint structures that require reflection, composition with traditional CS infrastructure (autograders and rubrics), and human-in-the-loop quality assurance. We propose an exploration-first adoption framework emphasizing piloting, instrumentation, learning-preserving defaults, and evidence-based scaling. Recurrent risks include academic integrity, privacy, bias and equity, and over-reliance; we pair each with operational mitigations. The evidence supports generative AI as a mechanism for precision scaffolding when embedded in audit-ready workflows that preserve productive struggle while scaling personalized support.