Massive Open Online Courses (MOOCs) have become a popular choice for e-learning thanks to their great flexibility. However, because of the large numbers of learners and their diverse backgrounds, offering real-time support is taxing. Learners may post their feelings of confusion and struggle in the respective MOOC forums, but given the large volume of posts and the high workload of MOOC instructors, it is unlikely that instructors can identify all learners requiring intervention. This problem has recently been studied as a Natural Language Processing (NLP) problem and is known to be challenging, due to the imbalance of the data and the complex nature of the task. In this paper, we explore, for the first time, Bayesian deep learning on learner-based text posts with two methods, Monte Carlo Dropout and Variational Inference, as a new solution to assessing the need for instructor intervention on a learner's post. We compare models built with our proposed probabilistic methods against their non-Bayesian baseline counterparts under similar conditions, across different prediction scenarios. The results suggest that Bayesian deep learning provides a critical uncertainty measure that traditional neural networks do not supply, adding explainability, trust and robustness to AI, which is crucial in education-based applications. Additionally, it achieves similar or better performance than non-probabilistic neural networks, while also yielding lower variance.