This paper proposes a method for assessing differential item functioning (DIF) in item response theory (IRT) models. Its main virtue is that it does not require pre-specification of anchor items. The method is developed in two main steps: first, DIF is reformulated as a problem of outlier detection in IRT-based scaling; second, the latter problem is tackled using established methods from robust statistics. The proposal is a redescending M-estimator of IRT scaling parameters that is tuned to flag items with DIF at the desired asymptotic Type I error rate. One way of quantifying the robustness of the estimator is its finite-sample breakdown point, which is shown to equal 1/2 (i.e., the estimator remains bounded whenever fewer than half of the items on an assessment exhibit DIF). This theoretical result is complemented by simulation studies that illustrate the performance of the estimator and its associated test of DIF. The simulations show that the proposed method compares favorably to currently available approaches, and a real-data example illustrates its application in a research context where pre-specification of anchor items is infeasible. The focus of the paper is the two-parameter logistic model in two independent groups; extensions to other settings are considered in the conclusion.
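The paper's estimator is defined for 2PL scaling parameters and is not reproduced here. Purely to illustrate the general idea of a redescending M-estimator flagging outlying items, the sketch below applies Tukey's biweight (via iteratively reweighted least squares) to item-difficulty differences between two groups under a simple mean-shift scaling model. The function names, tuning constant, data, and model are illustrative assumptions, not the paper's actual procedure or tuning.

```python
import numpy as np

def tukey_weight(r, c=4.685):
    # Tukey biweight: redescending weight function; items with
    # |standardized residual| >= c receive weight exactly 0.
    w = np.zeros_like(r)
    inside = np.abs(r) < c
    w[inside] = (1.0 - (r[inside] / c) ** 2) ** 2
    return w

def robust_scaling_shift(b_ref, b_foc, c=4.685, tol=1e-8, max_iter=100):
    """Illustrative IRLS M-estimator of a scalar scaling shift between
    two groups' item difficulties; zero-weight items are DIF candidates.
    (Sketch only -- not the estimator proposed in the paper.)"""
    d = b_foc - b_ref                         # item-level differences
    mu = np.median(d)                         # robust starting value
    mad = np.median(np.abs(d - np.median(d)))
    scale = 1.4826 * mad if mad > 0 else 1.0  # MAD scale estimate
    for _ in range(max_iter):
        r = (d - mu) / scale
        w = tukey_weight(r, c)
        if w.sum() == 0:
            break
        mu_new = np.sum(w * d) / np.sum(w)
        if abs(mu_new - mu) < tol:
            mu = mu_new
            break
        mu = mu_new
    flags = tukey_weight((d - mu) / scale, c) == 0
    return mu, flags

# Hypothetical data: 8 invariant items shifted by 0.5, 2 items with large DIF (+6).
b_ref = np.array([-1.5, -1.0, -0.5, 0.0, 0.2, 0.5, 1.0, 1.5, -0.8, 0.8])
noise = np.array([0.0, 0.1, -0.1, 0.05, -0.05, 0.02, -0.02, 0.0, 0.0, 0.0])
b_foc = b_ref + 0.5 + noise
b_foc[-2:] += 6.0  # DIF on the last two items
mu, flags = robust_scaling_shift(b_ref, b_foc)
```

Because the weight function redescends to zero, the two DIF items drop out of the fit entirely and the recovered shift stays near 0.5, loosely mirroring the breakdown behavior described in the abstract (fewer than half of the items contaminated).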