Knowledge Tracing (KT) is a research field that aims to estimate a student's knowledge state from their learning interactions, a crucial component of Intelligent Tutoring Systems (ITSs). Despite significant advancements, no current KT model excels in both predictive accuracy and interpretability. Meanwhile, Large Language Models (LLMs), pre-trained on vast natural language corpora, have emerged as powerful tools with immense potential for a range of educational applications. This systematic review explores the intersections, opportunities, and challenges of combining KT models and LLMs in educational contexts. The review first investigates LLM applications in education, including their adaptability to domain-specific content and their ability to support personalized learning. It then examines the development and current state of KT models, from traditional to advanced approaches, to uncover challenges that LLMs could help mitigate. The core of the review focuses on integrating LLMs with KT, exploring three primary functions: addressing general concerns in the KT field, overcoming limitations of specific KT models, and serving as KT models themselves. Our findings show that LLMs can be customized for specific educational tasks through techniques such as in-context learning and agent-based approaches, allowing them to handle complex and imbalanced educational data. LLMs can also enhance the performance of existing KT models and alleviate cold-start problems by generating relevant features from question data. However, both KT models and current LLM-based approaches depend heavily on structured, limited datasets, missing opportunities to use diverse educational data that could offer deeper insights into individual learners and support a wider range of educational settings.
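To make the idea of an LLM "performing as a KT model" via in-context learning more concrete, the sketch below shows one possible way to serialize a student's interaction history into a prompt that asks the model to predict the outcome of the next question. The interaction schema, prompt wording, and function names are illustrative assumptions, not a format prescribed by the review; the resulting prompt string would then be passed to an LLM of the reader's choice.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Interaction:
    """A single student-question interaction (illustrative schema, not from the review)."""
    question_text: str
    skill: str
    correct: bool


def build_kt_prompt(history: List[Interaction], next_question: str, next_skill: str) -> str:
    """Format a student's interaction history as an in-context learning prompt.

    The LLM is asked to act as a knowledge-tracing model: given prior responses,
    predict whether the student will answer the next question correctly.
    """
    lines = [
        "You are a knowledge tracing assistant.",
        "Given a student's past responses, predict whether they will answer the next question correctly.",
        "",
        "Past interactions:",
    ]
    for i, it in enumerate(history, 1):
        outcome = "correct" if it.correct else "incorrect"
        lines.append(f"{i}. [{it.skill}] {it.question_text} -> {outcome}")
    lines += [
        "",
        f"Next question [{next_skill}]: {next_question}",
        "Will the student answer correctly? Reply with 'yes' or 'no' and a brief rationale.",
    ]
    return "\n".join(lines)


if __name__ == "__main__":
    history = [
        Interaction("Solve 3x + 5 = 11 for x.", "linear-equations", True),
        Interaction("Factor x^2 - 9.", "factoring", False),
        Interaction("Solve 2(x - 1) = 8 for x.", "linear-equations", True),
    ]
    prompt = build_kt_prompt(history, "Factor x^2 - 4x + 4.", "factoring")
    print(prompt)  # Send this prompt to an LLM and parse its yes/no answer as the prediction.
```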