Automatic analysis of teacher and student interactions could be very important for improving teaching quality and student engagement. However, despite recent progress in utilizing multimodal data for teaching and learning analytics, a thorough analysis of a rich multimodal dataset collected from a complex, real-world learning environment has yet to be done. To bridge this gap, we present a large-scale MUlti-modal Teaching and Learning Analytics (MUTLA) dataset. This dataset includes time-synchronized multimodal data records of students (learning logs, videos, and EEG brainwaves) as they work on various subjects in the Squirrel AI Learning System (SAIL) to solve problems of varying difficulty levels. The dataset resources include user records from the learner records store of SAIL, brainwave data collected by EEG headset devices, and video data captured by web cameras while students worked with SAIL products. Our hope is that by analyzing real-world student learning activities, facial expressions, and brainwave patterns, researchers can better predict engagement, which can then be used to improve adaptive learning selection and student learning outcomes. An additional goal is to provide a dataset gathered from real-world educational activities, rather than from controlled lab environments, to benefit the educational learning community.