This study investigates differential item functioning (DIF) detection in computerized adaptive testing (CAT) using multilevel modeling. We argue that traditional DIF methods are ineffective in CAT because of the hierarchical nature of the data: item responses are nested within examinees whose provisional ability estimates evolve during the test. Our proposed two-level model accounts for these dependencies between items via the provisional ability estimates. Simulations showed that our model outperformed competing methods in Type I error control and statistical power, particularly under high item-exposure rates and longer test lengths. For future research, we recommend expanding item pools, incorporating item parameters, and exploring Bayesian estimation to further enhance DIF detection in CAT. Balancing model complexity against estimation convergence remains a key challenge for robust results.
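To make the detection task concrete, the sketch below simulates Rasch-type responses for a reference and a focal group, with uniform DIF planted in two items, and flags DIF by logistic regression conditioned on a provisional-ability proxy (a standardized rest score). This is an illustrative single-level analogue of the idea, not the authors' two-level model; the effect-size cutoff, DIF magnitude, and sample sizes are arbitrary choices for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(7)

n_per_group = 500
n_items = 10
dif_items = {0, 1}   # items with uniform DIF against the focal group (assumed)
dif_size = 1.0       # difficulty shift for the focal group, in logits

# Equal ability distributions in both groups
theta = rng.normal(0.0, 1.0, 2 * n_per_group)
group = np.repeat([0, 1], n_per_group)

# Rasch difficulties; DIF items are harder for the focal group
b = rng.normal(0.0, 1.0, n_items)
is_dif = np.isin(np.arange(n_items), list(dif_items))
b_eff = b[None, :] + dif_size * group[:, None] * is_dif
p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b_eff)))
resp = (rng.random(p.shape) < p).astype(float)

def fit_logistic(X, y, n_iter=50):
    """Newton-Raphson fit of a logistic regression; returns coefficients."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = 1.0 / (1.0 + np.exp(-(X @ beta)))
        W = mu * (1.0 - mu) + 1e-9            # guard against singular Hessian
        H = X.T @ (W[:, None] * X)
        beta += np.linalg.solve(H, X.T @ (y - mu))
    return beta

flagged = []
for j in range(n_items):
    # Provisional-ability proxy: standardized rest score (total minus item j)
    rest = resp.sum(axis=1) - resp[:, j]
    z = (rest - rest.mean()) / rest.std()
    X = np.column_stack([np.ones_like(z), z, group])  # intercept, ability, group
    beta = fit_logistic(X, resp[:, j])
    if abs(beta[2]) > 0.5:   # crude effect-size cutoff for this sketch
        flagged.append(j)

print(flagged)   # items whose group coefficient exceeds the cutoff
```

With these settings the two planted DIF items produce group coefficients near the planted shift, well above the sampling noise of the non-DIF items; a multilevel formulation would instead model the item-within-examinee dependency explicitly rather than conditioning on a single rest-score proxy.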