Many modern datasets, such as those in ecology and geology, are composed of samples with spatial structure and dependence. Because such data violate the usual independent and identically distributed (IID) assumption of machine learning and classical statistics, it is unclear a priori how one should measure the performance and generalization of models. Several authors have empirically investigated cross-validation (CV) methods in this setting, reaching mixed conclusions. We provide a class of estimators that are unbiased for a noise-elevated version of the error, covering general quadratic errors, correlated Gaussian responses, and an arbitrary prediction function $g$. Our approach generalizes the coupled bootstrap (CB) from the normal means problem to general normal data, allowing correlation both within and between the training and test sets. CB relies on creating bootstrap samples that are intelligently decoupled, in the sense of being statistically independent. Specifically, the key to CB lies in generating two independent "views" of our data and using them as stand-ins for the usual independent training and test samples. Beginning with Mallows' $C_p$, we generalize the estimator to develop our generalized $C_p$ (GC) estimators. We show that, under only a moment condition on $g$, this noise-elevated error estimate converges smoothly to the noiseless error estimate. We also show that when Stein's unbiased risk estimator (SURE) applies, GC converges to SURE, as in the normal means problem. Further, we use the same tools to analyze CV, providing theoretical analysis that helps explain when CV yields good estimates of error. Simulations align with our theoretical results, demonstrating the effectiveness of GC and illustrating the behavior of CV methods. Lastly, we apply our estimator to a model selection task on geothermal data in Nevada.
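To give intuition for the two independent "views" mentioned above, the following is a minimal sketch of the coupled-bootstrap randomization in the simplest normal means setting (not the paper's general correlated-Gaussian construction): with $Y \sim N(\theta, \sigma^2 I)$ and auxiliary noise $\omega \sim N(0, \sigma^2 I)$, the pair $Y + \sqrt{\alpha}\,\omega$ and $Y - \omega/\sqrt{\alpha}$ are jointly Gaussian with zero cross-covariance, hence independent. The variable names and the choice of $\alpha$ here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma, alpha = 5000, 1.0, 0.5   # alpha controls the noise elevation
theta = np.zeros(n)                 # true means (arbitrary for this check)

Y = theta + sigma * rng.standard_normal(n)      # observed data
omega = sigma * rng.standard_normal(n)          # independent auxiliary noise

# Two decoupled "views": one noise-elevated (train), one acting as test.
Y_train = Y + np.sqrt(alpha) * omega            # Var = sigma^2 * (1 + alpha)
Y_test = Y - omega / np.sqrt(alpha)             # Var = sigma^2 * (1 + 1/alpha)

# Cross-covariance cancels: sigma^2 - sigma^2 = 0, so the views are
# independent Gaussians; check empirically across the n coordinates.
cross_cov = np.cov(Y_train, Y_test)[0, 1]
```

Because independence here is exact (the covariances cancel algebraically), the empirical cross-covariance should shrink at the usual $O(n^{-1/2})$ rate as $n$ grows.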