Compressed sensing has been a very successful high-dimensional signal acquisition and recovery technique that relies on linear operations. However, the actual measurements of signals have to be quantized before storing or processing. One-bit compressed sensing is a heavily quantized version of compressed sensing, where each linear measurement of a signal is reduced to just one bit: the sign of the measurement. Once enough such measurements are collected, the recovery problem in 1-bit compressed sensing aims to find the original signal with as much accuracy as possible. The recovery problem is related to the traditional "halfspace-learning" problem in learning theory. For recovery of sparse vectors, a popular reconstruction method from 1-bit measurements is the binary iterative hard thresholding (BIHT) algorithm. The algorithm is a simple projected sub-gradient descent method, and is known to converge well empirically, despite the nonconvexity of the problem. The convergence property of BIHT was not theoretically justified, except with an exorbitantly large number of measurements (i.e., a number of measurements greater than $\max\{k^{10}, 24^{48}, k^{3.5}/\epsilon\}$, where $k$ is the sparsity, $\epsilon$ denotes the approximation error, and even this expression hides other factors). In this paper we show that the BIHT algorithm converges with only $\tilde{O}(\frac{k}{\epsilon})$ measurements. Note that this dependence on $k$ and $\epsilon$ is optimal for any recovery method in 1-bit compressed sensing. With this result, to the best of our knowledge, BIHT is the only practical and efficient (polynomial-time) algorithm that requires the optimal number of measurements in all parameters (both $k$ and $\epsilon$). This is also an example of a gradient descent algorithm converging to the correct solution for a nonconvex problem, under suitable structural conditions.
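To make the projected sub-gradient structure of BIHT concrete, the following is a minimal NumPy sketch: a sub-gradient step on the sign-consistency objective, followed by projection onto the set of $k$-sparse vectors via hard thresholding. The step size `tau` and iteration count are illustrative choices, not parameters prescribed by the paper; since 1-bit measurements lose all scale information, the iterate is renormalized to unit norm.

```python
import numpy as np

def hard_threshold(x, k):
    # Projection onto k-sparse vectors: keep the k largest-magnitude
    # entries and zero out the rest.
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]
    out[idx] = x[idx]
    return out

def biht(A, y, k, iters=200, tau=1.0):
    """Sketch of binary iterative hard thresholding (BIHT).

    A : (m, n) measurement matrix.
    y : length-m vector of signs of the linear measurements (+/-1).
    k : sparsity of the signal to recover.
    """
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(iters):
        # Sub-gradient of the (nonconvex, nonsmooth) objective that
        # penalizes sign mismatches between sign(A @ x) and y.
        g = A.T @ (np.sign(A @ x) - y)
        # Gradient step, then project back onto the k-sparse set.
        x = hard_threshold(x - (tau / m) * g, k)
        # Signs are scale-invariant, so fix the norm of the iterate.
        nrm = np.linalg.norm(x)
        if nrm > 0:
            x /= nrm
    return x
```

With i.i.d. Gaussian measurements and `m` sufficiently large relative to `k`, the returned unit vector should align closely with the (normalized) true sparse signal, matching the empirical behavior the abstract describes.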