A significant body of recent research in Learning Analytics has focused on leveraging machine learning to predict at-risk students, so that timely interventions can be initiated and retention and completion rates improved. The majority of these studies, however, have concentrated exclusively on the science of prediction. The component of predictive analytics concerned with interpreting the internals of the models and explaining their predictions for individual cases to stakeholders has largely been neglected. Likewise, work that employs data-driven prescriptive analytics to automatically generate evidence-based remedial advice for at-risk learners is still in its infancy. eXplainable AI (XAI) is a recently emerged field that provides cutting-edge tools supporting transparent predictive analytics, together with techniques for generating tailored advice for at-risk students. This study proposes a novel framework that unifies transparent machine learning with techniques for enabling prescriptive analytics. The work demonstrates the proposed framework in practice using predictive models for identifying learners at risk of programme non-completion, and then shows, through two case studies, how predictive modelling can be augmented with prescriptive analytics to generate human-readable prescriptive feedback for those who are at risk.
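The study's concrete models and XAI tooling are detailed in later sections; purely as an illustration of the framework's two halves, the sketch below pairs a transparent predictive model (a logistic regression whose per-feature contributions explain each individual at-risk prediction) with template-based prescriptive feedback. The feature names, toy data, risk threshold, and advice templates are all illustrative assumptions and do not come from the study.

    # Minimal illustrative sketch (assumptions throughout): a transparent
    # logistic-regression model flags at-risk learners, per-feature
    # contributions explain each individual prediction, and the most
    # harmful contributions are mapped to human-readable advice templates.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.preprocessing import StandardScaler

    # Hypothetical engagement features; a real study would derive these
    # from LMS logs, assessment records, and similar institutional data.
    features = ["logins_per_week", "assignments_submitted", "forum_posts"]
    X = np.array([[9, 5, 4], [1, 0, 0], [6, 4, 2], [0, 1, 0],
                  [8, 5, 3], [2, 1, 1], [7, 4, 4], [1, 0, 1]], dtype=float)
    y = np.array([1, 0, 1, 0, 1, 0, 1, 0])  # 1 = completed, 0 = did not

    scaler = StandardScaler().fit(X)
    model = LogisticRegression().fit(scaler.transform(X), y)

    # Hypothetical advice templates keyed by feature name.
    ADVICE = {
        "logins_per_week": "Log in to the course site more regularly.",
        "assignments_submitted": "Catch up on outstanding assignments.",
        "forum_posts": "Participate in the discussion forums.",
    }

    def explain_and_prescribe(x_raw, risk_threshold=0.5, top_k=2):
        """Flag a learner as at-risk and return feedback for the features
        that pull the completion probability down the most."""
        x = scaler.transform([x_raw])
        p_complete = model.predict_proba(x)[0, 1]
        if p_complete >= risk_threshold:
            return p_complete, []  # not at risk: no remedial advice
        # Per-feature contribution to the completion log-odds; the most
        # negative terms are the ones driving the at-risk prediction.
        contrib = model.coef_[0] * x[0]
        worst = np.argsort(contrib)[:top_k]
        return p_complete, [ADVICE[features[i]] for i in worst]

    prob, advice = explain_and_prescribe([1.0, 0.0, 1.0])
    print(f"P(completion) = {prob:.2f}")
    for tip in advice:
        print("-", tip)

A production framework would substitute institution-specific features and a dedicated XAI library for the hand-rolled contribution scores; the pattern of ranking per-case explanations and mapping them to feedback templates is the part that carries over.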