Manual grading of programming assignments in introductory computer science courses is time-consuming and prone to inconsistency. While unit testing is commonly used for automatic evaluation, it typically follows a binary pass/fail model and awards no partial credit. Recent advances in large language models (LLMs) offer the potential for automated, scalable, and more objective grading. This paper compares two AI-based grading techniques: \textit{Direct}, in which the AI model applies a rubric directly to student code, and \textit{Reverse}, a newly proposed approach in which the AI first fixes the errors in a submission and then deduces a grade from the nature and number of fixes. Each method was evaluated on both the instructor's original grading scale and a tenfold expanded scale to assess the impact of scale range on AI grading accuracy. To assess their effectiveness, AI-assigned scores were compared against human tutor evaluations across a range of coding problems and error types. Initial findings suggest that while the Direct approach is faster and simpler, the Reverse technique often provides a finer-grained assessment by focusing on correction effort. Both methods require careful prompt engineering, particularly for allocating partial credit and handling logic errors. To further test consistency, we also used synthetic student code generated with Gemini Flash 2.0, which allowed us to evaluate the AI graders on a wider, controlled range of error types and difficulty levels. We discuss the strengths and limitations of each approach, practical considerations for prompt design, and future directions for hybrid human-AI grading systems that aim to improve consistency, efficiency, and fairness in CS courses.
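To make the contrast between the two pipelines concrete, the sketch below shows one way the Direct (rubric-applied-to-code) and Reverse (fix-first, then score from the fixes) prompts could be wired together. This is a minimal illustrative assumption, not the prompts or code used in the study; the \texttt{llm} callable, the rubric text, and the function names \texttt{direct\_grade\_prompt}, \texttt{reverse\_grade\_prompts}, and \texttt{grade} are all hypothetical placeholders.

\begin{verbatim}
# Illustrative sketch only: any text-in/text-out LLM client can serve as `llm`.

def direct_grade_prompt(rubric: str, student_code: str, max_score: int) -> str:
    """Direct: ask the model to apply the rubric to the code and return a score."""
    return (
        "You are grading an introductory programming assignment.\n"
        f"Rubric:\n{rubric}\n\n"
        f"Student code:\n{student_code}\n\n"
        "Apply the rubric item by item and output a single integer score "
        f"between 0 and {max_score}, followed by a short justification."
    )

def reverse_grade_prompts(rubric: str, student_code: str, max_score: int):
    """Reverse: first ask the model to fix the code, then grade from the fixes."""
    fix_prompt = (
        "Fix all errors in the following student code so that it satisfies the "
        "assignment described by this rubric, and list every change you made.\n\n"
        f"Rubric:\n{rubric}\n\nStudent code:\n{student_code}"
    )
    # The second prompt is filled in once the model's list of fixes is available.
    grade_template = (
        "Given the list of fixes below, deduct points from a maximum of "
        f"{max_score} according to the nature and number of fixes, and output "
        "the final integer score with a brief justification.\n\nFixes:\n{fixes}"
    )
    return fix_prompt, grade_template

def grade(llm, rubric: str, student_code: str, max_score: int,
          method: str = "direct") -> str:
    """Run one submission through the chosen pipeline.

    `llm` is any callable mapping a prompt string to the model's text reply.
    """
    if method == "direct":
        return llm(direct_grade_prompt(rubric, student_code, max_score))
    fix_prompt, grade_template = reverse_grade_prompts(rubric, student_code, max_score)
    fixes = llm(fix_prompt)                          # step 1: repair and list changes
    return llm(grade_template.format(fixes=fixes))   # step 2: score from the fixes
\end{verbatim}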