Data-driven predictive models are increasingly used in education to support students, instructors, and administrators. However, there are concerns about the fairness of the predictions and uses of these algorithmic systems. In this introduction to algorithmic fairness in education, we draw parallels to prior literature on educational access, bias, and discrimination, and we examine the core components of algorithmic systems (measurement, model learning, and action) to identify sources of bias and discrimination in the process of developing and deploying these systems. We review statistical, similarity-based, and causal notions of fairness and contrast how they apply in educational contexts. Recommendations for policy makers and developers of educational technology offer guidance on how to promote algorithmic fairness in education.