We introduce a heuristic to test the significance of fit of Self-Validated Ensemble Models (SVEM) against the null hypothesis of a constant response. A SVEM model averages predictions from nBoot fits of a base model, each applied to a fractionally weighted bootstrap of the target dataset. Each fit is tuned on a validation copy of the training data, using anti-correlated fractional weights for training and validation. The proposed test computes SVEM predictions, centered by the response column mean and normalized by the ensemble variability, at each of nPoint points spaced throughout the factor space. A reference distribution is constructed by refitting the SVEM model to nPerm random permutations of the response column and recording the corresponding standardized predictions at the same nPoint points. A reduced-rank singular value decomposition of the centered and scaled nPerm × nPoint reference matrix is used to compute the Mahalanobis distance of each of the nPerm permutation results, as well as the jackknife (holdout) Mahalanobis distance of the original response column. The process is repeated independently for each response in the experiment, producing a joint graphical summary. We present a simulation-driven power analysis and discuss limitations of the test relating to model flexibility and design adequacy. The test maintains the nominal Type I error rate even when the base SVEM model contains more parameters than observations.
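A minimal Python sketch of the permutation skeleton follows, assuming only numpy: standardized predictions on a grid, a reference matrix built from permuted responses, a reduced-rank SVD, and Mahalanobis distances. The `svem_predict` stub (its ridge solver, weight scheme, and tolerance constants) is a hypothetical stand-in for an actual SVEM implementation, not the method itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def svem_predict(X, y, X_grid, n_boot=200):
    """Hypothetical stand-in for an SVEM fit-and-predict routine.

    A real SVEM averages n_boot model fits, each trained on a fractionally
    weighted bootstrap of (X, y) with anti-correlated training/validation
    weights. This stub uses a weighted ridge fit per bootstrap draw purely
    so the sketch runs end to end.
    """
    preds = np.empty((n_boot, X_grid.shape[0]))
    n = len(y)
    for b in range(n_boot):
        w = rng.exponential(size=n)                 # fractional bootstrap weights
        A = (X * w[:, None]).T @ X + 1e-3 * np.eye(X.shape[1])
        beta = np.linalg.solve(A, X.T @ (w * y))
        preds[b] = X_grid @ beta
    return preds.mean(axis=0), preds.std(axis=0, ddof=1)

def standardized_predictions(X, y, X_grid):
    # Center by the response column mean; scale by the ensemble variability
    # at each of the nPoint grid locations.
    mean_pred, ens_sd = svem_predict(X, y, X_grid)
    return (mean_pred - y.mean()) / np.maximum(ens_sd, 1e-12)

def svem_significance_test(X, y, X_grid, n_perm=100):
    # Reference matrix: standardized predictions under permuted responses.
    R = np.vstack([standardized_predictions(X, rng.permutation(y), X_grid)
                   for _ in range(n_perm)])
    mu, sd = R.mean(axis=0), R.std(axis=0, ddof=1)
    Z = (R - mu) / np.maximum(sd, 1e-12)

    # Reduced-rank SVD of the centered and scaled n_perm x n_point matrix.
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    k = int(np.sum(s > 1e-8 * s[0]))                # retained rank
    V_k, s_k = Vt[:k].T, s[:k]

    def mahalanobis(z):
        # Distance in the rank-k principal subspace of the reference matrix.
        return np.sqrt(np.sum((z @ V_k * np.sqrt(n_perm - 1) / s_k) ** 2))

    d_perm = np.array([mahalanobis(z) for z in Z])
    z_obs = (standardized_predictions(X, y, X_grid) - mu) / np.maximum(sd, 1e-12)
    d_obs = mahalanobis(z_obs)                      # holdout distance
    # Permutation p-value: how extreme is the observed distance?
    p_value = (1 + np.sum(d_perm >= d_obs)) / (1 + n_perm)
    return d_obs, d_perm, p_value
```

In this sketch the original response's distance is a holdout quantity: its standardized predictions do not enter the reference matrix that defines the principal subspace, mirroring the jackknife (holdout) distance described above. The sketch also handles one response at a time; for a multi-response experiment it would be run once per response column.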