This work is about recovering an analysis-sparse vector, i.e., a vector that is sparse in some transform (analysis) domain, from under-sampled measurements. In real-world applications, one often encounters random analysis-sparse vectors whose distribution in the analysis domain is known in advance. To exploit this information, a weighted $\ell_1$ analysis minimization is often considered. Choosing the weights in this setting is, however, challenging and non-trivial. In this work, we provide an analytical method for choosing suitable weights. Specifically, we first obtain a tight upper-bound expression for the expected number of required measurements. This bound depends on two critical parameters: the support distribution and the expected sign pattern in the analysis domain, both of which are accessible in advance. We then compute near-optimal weights by minimizing this expression with respect to the weights. Our strategy works in both noiseless and noisy settings. Numerical results demonstrate the superiority of the proposed method: the weighted $\ell_1$ analysis minimization with our near-optimal weight design needs considerably fewer measurements than its regular $\ell_1$ analysis counterpart.
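For reference, a standard form of the weighted $\ell_1$ analysis program discussed above is sketched below; the notation ($\mathbf{\Omega} \in \mathbb{R}^{p \times n}$ for the analysis operator, $w_i$ for the weights, $\mathbf{A}$ for the measurement matrix, $\eta$ for the noise level) is assumed here for illustration and is not fixed by the abstract itself:
$$
\hat{\mathbf{x}} = \underset{\mathbf{z}\in\mathbb{R}^n}{\arg\min}\ \sum_{i=1}^{p} w_i\,\bigl|(\mathbf{\Omega}\mathbf{z})_i\bigr| \quad \text{subject to} \quad \|\mathbf{A}\mathbf{z}-\mathbf{y}\|_2 \le \eta,
$$
where setting $\eta = 0$ gives the noiseless (equality-constrained) case and setting $w_i \equiv 1$ recovers the regular, unweighted $\ell_1$ analysis program.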