In this paper, we propose a uniformly dithered 1-bit quantization scheme for high-dimensional statistical estimation. The scheme consists of truncation, dithering, and quantization as typical steps. As canonical examples, we apply the quantization scheme to the estimation problems of sparse covariance matrix estimation, sparse linear regression (i.e., compressed sensing), and matrix completion. We study both sub-Gaussian and heavy-tailed regimes, where the underlying distribution of the heavy-tailed data is assumed to have bounded moments of some order. We propose new estimators based on 1-bit quantized data. In the sub-Gaussian regime, our estimators achieve near-minimax rates, indicating that our quantization scheme costs very little. In the heavy-tailed regime, while the rates of our estimators become essentially slower, these results are either the first in a 1-bit quantized and heavy-tailed setting or already improve on comparable existing results in some respects. Under the observations in our setting, the rates are almost tight in compressed sensing and matrix completion. Our 1-bit compressed sensing results feature general sensing vectors that are sub-Gaussian or even heavy-tailed. We are also the first to investigate a novel setting where both the covariate and the response are quantized. In addition, our approach to 1-bit matrix completion does not rely on a likelihood model and represents the first method robust to pre-quantization noise with unknown distribution. Experimental results on synthetic data are presented to support our theoretical analysis.
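As a rough illustration of the three steps named above (truncation, dithering, 1-bit quantization), the following minimal sketch quantizes a scalar signal with uniform dither and recovers it by averaging the signs. The function name `dithered_one_bit` and the parameters `tau` and `delta` are illustrative assumptions, not the paper's exact estimators; the sketch only demonstrates the standard unbiasedness property of uniform dithering.

```python
import numpy as np

rng = np.random.default_rng(0)

def dithered_one_bit(x, tau, delta):
    """Truncate x to [-tau, tau], add Uniform(-delta, delta) dither, return signs."""
    x_trunc = np.clip(x, -tau, tau)               # truncation step
    dither = rng.uniform(-delta, delta, x.shape)  # uniform dithering step
    return np.sign(x_trunc + dither)              # 1-bit quantization step

# If |x| <= delta, then E[delta * sign(x + dither)] = x, so the 1-bit
# observations are unbiased for x and averaging approximately recovers it.
x, delta = 0.3, 1.0
bits = dithered_one_bit(np.full(100_000, x), tau=delta, delta=delta)
print(delta * bits.mean())  # close to 0.3
```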