In this paper, we propose an abstract procedure for debiasing constrained or regularized, potentially high-dimensional, linear models. It is elementary to show that the proposed procedure produces $\frac{1}{\sqrt{n}}$-confidence intervals for individual coordinates (or even bounded contrasts) in models with unknown covariance, provided that the covariance has bounded spectrum. While the proof of the statistical guarantees of our procedure is simple, its implementation requires more care due to the complexity of the optimization programs we need to solve. We spend the bulk of this paper giving examples in which the proposed algorithm can be implemented in practice. One fairly general class of instances amenable to our procedure is convex constrained least squares. We translate the procedure into an abstract algorithm over this class of models, and we give concrete examples where efficient polynomial-time methods for debiasing exist. These include the constrained version of the LASSO, regression under monotone constraints, regression with positive monotone constraints, and non-negative least squares. In addition, we show that our abstract procedure can be applied to efficiently debias SLOPE and square-root SLOPE, among other popular regularized procedures, under certain assumptions. We provide thorough simulation results in support of our theoretical findings.
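To fix ideas, the following is a minimal numerical sketch of the generic one-step debiasing correction $\hat\beta + \hat M X^\top (y - X\hat\beta)/n$ that underlies methods of this kind; it is not the paper's procedure itself. The base estimator here is non-negative least squares fit by projected gradient descent, and the inverse-covariance estimate $\hat M$ is taken to be the sample inverse, which is only valid in the low-dimensional regime $p \ll n$; all names and parameter choices below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[0] = 2.0  # one active non-negative coordinate
y = X @ beta + 0.5 * rng.standard_normal(n)

def nnls_pg(X, y, steps=2000):
    """Non-negative least squares via projected gradient descent (illustrative)."""
    lr = 1.0 / np.linalg.norm(X, 2) ** 2  # step size from the spectral norm
    b = np.zeros(X.shape[1])
    for _ in range(steps):
        # gradient step on the least-squares loss, then project onto the orthant
        b = np.clip(b - lr * X.T @ (X @ b - y), 0.0, None)
    return b

b_hat = nnls_pg(X, y)

# One-step debiasing: add M X^T(y - X b_hat)/n, with M an estimate of the
# inverse covariance.  Using the sample inverse is only sensible when p << n;
# handling harder regimes is the point of more careful constructions.
M = np.linalg.inv(X.T @ X / n)
b_debiased = b_hat + M @ X.T @ (y - X @ b_hat) / n
```

With this choice of $\hat M$ the correction recovers ordinary least squares exactly, so `b_debiased[0]` concentrates around the true value $2$ at the $1/\sqrt{n}$ rate; the interest of the general procedure lies in settings where such an explicit inverse is unavailable.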