Overfitting in linear regression is decomposed into two main causes. First, the formula for the estimator incorporates 'forbidden knowledge' about the training observations' residuals, an advantage it loses when deployed out-of-sample. Second, the estimator has 'specialized training' that makes it particularly capable of explaining movements in the predictors that are idiosyncratic to the training sample. An out-of-sample counterpart to the popular 'leverage' measure of training observations' importance is introduced. A new method is proposed to forecast out-of-sample fit at the time of deployment, when the values of the predictors are known but the true outcome variable is not. In Monte Carlo simulations and in an empirical application using MRI brain scans, the proposed estimator performs comparably to the Predicted Residual Error Sum of Squares (PRESS) statistic in the average out-of-sample case and, unlike PRESS, also performs consistently across different test samples, even those that differ substantially from the training set.
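The abstract builds on two classical quantities, leverage and PRESS, without stating their formulas. A minimal NumPy sketch of those standard definitions (illustrative only; this is not the paper's proposed estimator, and the data here are synthetic):

```python
import numpy as np

# Synthetic regression data for illustration.
rng = np.random.default_rng(0)
n, p = 50, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])
beta = rng.normal(size=p + 1)
y = X @ beta + rng.normal(size=n)

# Hat matrix H = X (X'X)^{-1} X'; leverage h_i = H_ii measures each
# training observation's influence on its own fitted value.
H = X @ np.linalg.solve(X.T @ X, X.T)
h = np.diag(H)

# In-sample residuals, and the leave-one-out (PRESS) residuals
# e_i / (1 - h_i); PRESS is their sum of squares.
e = y - H @ y
press_residuals = e / (1 - h)
press = np.sum(press_residuals**2)

# PRESS always exceeds the in-sample residual sum of squares,
# reflecting the optimism of in-sample fit that the abstract calls
# 'forbidden knowledge' about the training residuals.
print(press > np.sum(e**2))
```

The leverages sum to the number of regression coefficients (the trace of H), so high-dimensional models concentrate more 'specialized training' on individual observations.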