We introduce a new cross-validation method based on an equicorrelated Gaussian randomization scheme. The method is well suited to problems where sample splitting is infeasible, such as when the data violate the assumption of independent and identically distributed observations. Even when sample splitting is possible, our method offers a computationally efficient alternative for estimating the prediction error, achieving comparable or even lower error than standard cross-validation with only a few train-test repetitions. Drawing inspiration from recent techniques such as data fission and data thinning, our method constructs train-test data pairs using externally generated Gaussian randomization variables. The key innovation is a carefully designed correlation structure among the randomization variables, which we refer to as antithetic Gaussian randomization. We show, in theory, that this correlation is crucial for keeping the variance of our estimator bounded while allowing its bias to vanish. Through simulations on various data types and loss functions, we highlight the advantages of the antithetic Gaussian randomization scheme over both independent randomization and standard cross-validation, for which the bias-variance tradeoff depends heavily on the number of folds.
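A minimal sketch of the construction described above may help. The abstract does not specify the exact train-test splitting formula, so the data-fission-style updates below, together with the scale parameter `alpha`, the noise level `sigma`, and the helper names `fit` and `loss`, are assumptions for illustration only; the one detail taken directly from the text is the antithetic structure, obtained here by centering K i.i.d. Gaussian draws so that the resulting variables are equicorrelated with correlation -1/(K-1) and sum to zero.

```python
import numpy as np

def antithetic_gaussian_cv(y, fit, loss, K=5, alpha=1.0, sigma=1.0, rng=None):
    """Sketch of cross-validation with antithetic Gaussian randomization.

    y     : (n,) response vector
    fit   : maps a training response to predictions (hypothetical helper)
    loss  : loss(y_pred, y_test) -> scalar (hypothetical helper)
    K     : number of train-test repetitions
    alpha : train/test allocation parameter (assumed, not from the abstract)
    sigma : randomization noise level (assumed, not from the abstract)
    """
    rng = np.random.default_rng() if rng is None else rng
    n = len(y)

    # Draw K i.i.d. Gaussian vectors and center them across repetitions:
    # the centered omega_k are equicorrelated with correlation -1/(K-1)
    # and sum exactly to zero -- the "antithetic" structure.
    Z = rng.normal(scale=sigma, size=(K, n))
    omega = Z - Z.mean(axis=0)

    errors = []
    for k in range(K):
        # Data-fission-style train/test copies (formula assumed): noise is
        # added to the training copy and a rescaled version is subtracted
        # from the test copy, decoupling the two under Gaussian noise with
        # a matched noise scale.
        y_train = y + alpha * omega[k]
        y_test = y - omega[k] / alpha
        errors.append(loss(fit(y_train), y_test))
    return float(np.mean(errors))
```

Plugging in, say, a ridge-regression predictor for `fit` and squared error for `loss` would yield a randomized estimate of prediction error from K correlated train-test repetitions, without ever splitting the sample itself.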