We propose a principal components regression method based on maximizing a joint pseudo-likelihood for responses and predictors. Our method uses both responses and predictors to select linear combinations of the predictors relevant for the regression, thereby addressing an oft-cited deficiency of conventional principal components regression. The proposed estimator is shown to be consistent in a wide range of settings, including ones with non-normal and dependent observations; conditions on the first and second moments suffice when the number of predictors ($p$) is fixed, the number of observations ($n$) tends to infinity, and dependence is weak, while stronger distributional assumptions are needed when $p \to \infty$ with $n$. We obtain the estimator's asymptotic distribution as the projection of a multivariate normal random vector onto a tangent cone of the parameter set at the true parameter, and find the estimator is asymptotically more efficient than competing ones. In simulations our method is substantially more accurate than conventional principal components regression and compares favorably to partial least squares and predictor envelopes. The method is illustrated in a data example with cross-sectional prediction of stock returns.
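For contrast, the conventional principal components regression baseline that the abstract criticizes, which selects components from the predictors alone without consulting the responses, can be sketched as follows. This is not the paper's proposed estimator; the function name `pcr_fit` and the simulated factor-model data are illustrative assumptions.

```python
import numpy as np

def pcr_fit(X, y, k):
    """Conventional principal components regression: pick the top-k
    principal components of X alone (ignoring y), then regress y on
    the resulting component scores."""
    Xc = X - X.mean(axis=0)                 # center predictors
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    W = Vt[:k].T                            # top-k loading directions (p x k)
    Z = Xc @ W                              # component scores (n x k)
    gamma, *_ = np.linalg.lstsq(Z, y - y.mean(), rcond=None)
    beta = W @ gamma                        # implied coefficients on original predictors
    intercept = y.mean() - X.mean(axis=0) @ beta
    return beta, intercept

# Illustrative data from a two-factor model (hypothetical, not from the paper)
rng = np.random.default_rng(0)
n, p, k = 200, 10, 2
B = rng.normal(size=(p, k))                 # latent loadings
F = rng.normal(size=(n, k))                 # latent factors
X = F @ B.T + 0.1 * rng.normal(size=(n, p))
y = F @ np.array([1.0, -2.0]) + 0.1 * rng.normal(size=n)

beta, intercept = pcr_fit(X, y, k)
resid = y - (X @ beta + intercept)
mse = float(np.mean(resid**2))
```

Because the components here are chosen from $X$ alone, nothing guarantees they capture the directions most relevant for predicting $y$; this is the deficiency the proposed joint pseudo-likelihood method is designed to address.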