Educational technologies increasingly rely on data and machine learning (ML) models, offering students, instructors, and administrators support and insight for better policy decisions. However, ML models are widely acknowledged to be subject to bias, which raises concerns about fairness, bias, and discrimination when such automated algorithms are used in education, together with their unintended and unforeseen negative consequences. Bias enters automated decision-making both through the datasets used to train ML models and through the model architecture itself. This paper presents a preliminary investigation of fairness constraints in transformer neural networks on the Law School and Student-Mathematics datasets. The transformer models map these raw tabular datasets into the richer representation space of natural language processing (NLP) while solving the fair-classification task. We employ standard fairness metrics for evaluation and examine the trade-off between fairness and accuracy, reporting F1 score, statistical parity difference (SPD), equal opportunity difference (EOD), and accuracy for several architectures from the transformer model class.
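As a concrete illustration of how a tabular record can be mapped into the NLP representation space of a transformer, the following is a minimal sketch assuming the HuggingFace transformers library and a BERT-style sequence classifier. The serialization template, model checkpoint, and column names are illustrative assumptions, not the paper's actual pipeline.

    # Minimal sketch: serialize a tabular row into text so that a
    # pretrained transformer can classify it. Assumes the HuggingFace
    # `transformers` library; the template and columns are illustrative.
    from transformers import AutoTokenizer, AutoModelForSequenceClassification
    import torch

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=2
    )

    # One hypothetical row in the style of the Law School dataset.
    row = {"lsat": 38.0, "ugpa": 3.4, "gender": "female", "race": "white"}

    # Flatten the row into a single sentence-like string.
    text = ", ".join(f"{col} is {val}" for col, val in row.items())

    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits  # shape [1, 2]: binary outcome scores
    print(text, logits.softmax(dim=-1))

The design point here is only the serialization step: once each row is a string, any pretrained text encoder can be fine-tuned on the classification labels, which is what lets NLP-scale representations be brought to bear on small tabular fairness benchmarks.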
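For reference, the two fairness metrics named above have standard definitions in the algorithmic-fairness literature (e.g., as implemented in IBM's AIF360 toolkit); the formulation below is that standard one, and treating it as this paper's exact convention is an assumption. With predicted label $\hat{y}$, true label $y$, and binary protected attribute $a$, where $a=0$ marks the unprivileged group:

$$\mathrm{SPD} = P(\hat{y}=1 \mid a=0) - P(\hat{y}=1 \mid a=1)$$

$$\mathrm{EOD} = P(\hat{y}=1 \mid a=0,\, y=1) - P(\hat{y}=1 \mid a=1,\, y=1)$$

Both quantities are zero under perfect parity; values farther from zero indicate a larger gap in positive-prediction rates (SPD) or true-positive rates (EOD) between the unprivileged and privileged groups.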