We consider the estimation problem in high-dimensional semi-supervised learning. Our goal is to investigate when and how unlabeled data can be exploited to improve the estimation of the regression parameters of a linear model, in light of the fact that such linear models may be misspecified in data analysis. We first establish a minimax lower bound for parameter estimation in the semi-supervised setting, and show that this lower bound cannot be achieved by supervised estimators that use the labeled data only. We propose an optimal semi-supervised estimator that attains this lower bound and therefore improves upon the supervised estimators, provided that the conditional mean function can be consistently estimated at a proper rate. We further propose a safe semi-supervised estimator; we call it safe because it is always at least as good as the supervised estimators. We also extend our idea to the aggregation of multiple semi-supervised estimators arising from different misspecifications of the conditional mean function. Extensive numerical simulations and a real data analysis are conducted to illustrate our theoretical results.
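To fix ideas, the following display is a minimal sketch of a generic semi-supervised construction of the kind alluded to above; it is illustrative only and is not claimed to be the estimator proposed here. The notation ($n$ labeled pairs, $N$ additional unlabeled covariates, and a pilot estimate $\hat m$ of the conditional mean) is introduced purely for exposition. Because the linear model may be misspecified, the target is the population least-squares projection, and the unlabeled covariates enter through the Gram matrix and through imputed responses $\hat m(X_i)$, with a bias correction computed on the labeled sample:
\[
\beta^{*} = \operatorname*{arg\,min}_{\beta}\; \mathbb{E}\,(Y - X^{\top}\beta)^{2},
\qquad
\hat\beta_{\mathrm{ss}}
= \widehat\Sigma^{-1}\Big\{\frac{1}{n+N}\sum_{i=1}^{n+N} X_i\,\hat m(X_i)
+ \frac{1}{n}\sum_{i=1}^{n} X_i\,\bigl(Y_i - \hat m(X_i)\bigr)\Big\},
\qquad
\widehat\Sigma = \frac{1}{n+N}\sum_{i=1}^{n+N} X_i X_i^{\top},
\]
where the first $n$ observations carry labels and the remaining $N$ do not. Since $\mathbb{E}\{X\,\hat m(X)\} + \mathbb{E}\{X(Y - \hat m(X))\} = \mathbb{E}(XY)$ for any fixed $\hat m$, the construction targets $\beta^{*}$ regardless of how well $\hat m$ approximates $\mathbb{E}(Y\mid X)$, and a more accurate $\hat m$ shifts information from the small labeled sample to the large pooled sample; in the high-dimensional regime the inverse of $\widehat\Sigma$ would be replaced by a regularized counterpart.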