We consider the problem of finding tuned regularized parameter estimators for linear models. We start by showing that three known optimal linear estimators belong to a wider class of estimators that can be formulated as the solution to a weighted and constrained minimization problem. The optimal weights, however, are unknown in most applications. This raises the question: how should the weights be chosen using only the data? We propose using the covariance-fitting SPICE methodology to obtain data-adaptive weights and show that the resulting class of estimators yields tuned versions of known regularized estimators, such as ridge regression, LASSO, and regularized least absolute deviation. These theoretical results unify several important estimators under a common umbrella. The resulting tuned estimators are also shown to be practically relevant through several numerical examples.
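As an illustration of the kind of hyperparameter-free ("tuned") regularized estimator referred to above, the following is a minimal sketch, assuming Python with numpy and cvxpy, of a square-root-LASSO-type estimator whose per-coefficient penalty weights are computed directly from the data (here, scaled column norms of the regressor matrix). The specific weighting is an illustrative assumption and not necessarily the exact SPICE-derived choice used in the paper.

```python
import numpy as np
import cvxpy as cp

def tuned_sqrt_lasso(X, y):
    """Square-root-LASSO-style estimate with data-adaptive weights.

    Illustrative sketch: the penalty weight on each coefficient is the
    corresponding column norm of X divided by sqrt(n), so no user-chosen
    regularization parameter is needed (assumed weighting, for illustration).
    """
    n, p = X.shape
    w = cp.Variable(p)
    # Data-adaptive per-coefficient weights (assumption for this sketch)
    lam = np.linalg.norm(X, axis=0) / np.sqrt(n)
    # Residual norm (not squared) plus weighted l1 penalty
    objective = cp.Minimize(cp.norm(y - X @ w, 2) + lam @ cp.abs(w))
    cp.Problem(objective).solve()
    return w.value

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((100, 20))
    theta = np.zeros(20)
    theta[:3] = [2.0, -1.5, 1.0]          # sparse ground truth
    y = X @ theta + 0.5 * rng.standard_normal(100)
    print(np.round(tuned_sqrt_lasso(X, y), 2))
```

Using the residual norm rather than its square makes the data-driven weighting scale-free with respect to the noise level, which is why no separate tuning of a regularization parameter is required in this sketch.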