Student Growth Percentiles (SGPs), widely adopted across U.S. state assessment systems, employ independent quantile regression followed by a post-hoc isotonic projection (\texttt{isotonize=TRUE} in the \texttt{SGP} R package) to correct quantile crossing. We demonstrate that this approach contains a fundamental methodological inconsistency: interpolation between independently estimated, potentially crossed quantiles requires monotonicity, yet the post-hoc correction alters estimates in ways that may violate the quantile property $P(Y \leq \hat{Q}_{\tau}(Y|X) \mid X) = \tau$. We term this the \emph{interpolation paradox}. Constrained joint quantile regression (CJQR) is theoretically sound, eliminating crossing by enforcing non-crossing constraints during optimization, but its computational complexity, typically $\mathcal{O}((qn)^3)$ for standard linear-programming solvers with $q$ quantile levels, renders it intractable for large-scale educational data ($n > 100{,}000$). We examine the SGP package's switch to the Frisch-Newton interior-point method (\texttt{rq.method.for.large.n="fn"}) for large $n$, noting that while it is efficient for \emph{independent} quantile regression, it resolves neither the joint problem's complexity nor the paradox. We propose neural network-based multi-quantile regression (NNQR) with shared hidden layers as a practical alternative: leveraging the convexity of the composite pinball loss, the SGD-based optimization used in neural network training can reliably approach the global optimum, offering $\mathcal{O}(n)$ scalability and implicitly reducing crossing. Our empirical analysis shows that independent quantile regression yields crossing, while both CJQR and NNQR enforce monotonicity. NNQR thus emerges as a viable, scalable alternative for operational SGP systems, aligning theoretical validity with computational feasibility.
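For reference, the composite pinball loss invoked above has the standard form (notation ours: $\theta$ denotes the model parameters and $\tau_1 < \cdots < \tau_q$ the quantile levels)
\[
\mathcal{L}(\theta) = \sum_{k=1}^{q} \sum_{i=1}^{n} \rho_{\tau_k}\!\bigl(y_i - \hat{Q}_{\tau_k}(y_i \mid x_i; \theta)\bigr),
\qquad
\rho_{\tau}(u) = u\bigl(\tau - \mathbb{1}\{u < 0\}\bigr),
\]
where each $\rho_{\tau}$ is convex and piecewise linear; this is the convexity the NNQR argument leverages.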
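To make the crossing problem concrete, the following minimal sketch (our illustration, not code from the \texttt{SGP} package; it assumes Python with \texttt{numpy} and \texttt{statsmodels} and uses synthetic heteroscedastic data) fits each quantile level independently and counts observations at which the fitted quantiles fail to be monotone in $\tau$:

\begin{verbatim}
# Independent quantile regression, one fit per tau (illustrative sketch).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = x + rng.standard_t(df=3, size=500) * (1 + np.abs(x))  # heteroscedastic
X = sm.add_constant(np.column_stack([x, x**2]))           # flexible design

taus = [0.05, 0.25, 0.50, 0.75, 0.95]
fits = [sm.QuantReg(y, X).fit(q=t) for t in taus]   # independent per-tau fits
preds = np.column_stack([f.predict(X) for f in fits])

# Share of observations where fitted quantiles are not monotone in tau.
crossed = (np.diff(preds, axis=1) < 0).any(axis=1).mean()
print(f"share of points with crossed quantiles: {crossed:.3f}")
\end{verbatim}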
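A corresponding NNQR sketch (again our illustration under the same caveats, assuming PyTorch) shares hidden layers across all quantile levels and minimizes the composite pinball loss jointly; the shared representation is what implicitly reduces, though does not formally guarantee, crossing:

\begin{verbatim}
# Minimal NNQR sketch: shared hidden layers, joint multi-quantile output.
import torch
import torch.nn as nn

taus = torch.tensor([0.05, 0.25, 0.50, 0.75, 0.95])  # example levels

class NNQR(nn.Module):
    def __init__(self, d_in, d_hidden, n_quantiles):
        super().__init__()
        self.shared = nn.Sequential(            # layers shared across taus
            nn.Linear(d_in, d_hidden), nn.ReLU(),
            nn.Linear(d_hidden, d_hidden), nn.ReLU())
        self.heads = nn.Linear(d_hidden, n_quantiles)  # all quantiles jointly

    def forward(self, x):
        return self.heads(self.shared(x))       # shape (batch, n_quantiles)

def pinball_loss(pred, y, taus):
    # rho_tau(u) = max(tau*u, (tau-1)*u), averaged over points and levels
    u = y.unsqueeze(1) - pred
    return torch.maximum(taus * u, (taus - 1.0) * u).mean()

# Synthetic stand-in for prior-score/current-score assessment data.
torch.manual_seed(0)
x = torch.randn(4096, 1)
y = 2.0 * x.squeeze(1) + torch.randn(4096) * (1.0 + 0.5 * x.abs().squeeze(1))

model = NNQR(d_in=1, d_hidden=64, n_quantiles=len(taus))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(500):
    opt.zero_grad()
    loss = pinball_loss(model(x), y, taus)
    loss.backward()
    opt.step()

# Post-hoc check: fitted quantiles should be (near-)monotone in tau.
with torch.no_grad():
    q_hat = model(x)
    crossings = (q_hat.diff(dim=1) < 0).float().mean().item()
print(f"fraction of adjacent-quantile crossings: {crossings:.4f}")
\end{verbatim}

Unlike CJQR, this formulation imposes no hard non-crossing constraint, so monotonicity is verified post hoc, as in the final lines above.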