In this paper, we propose a uniformly dithered one-bit quantization scheme for high-dimensional statistical estimation. The scheme consists of three steps: truncation, dithering, and quantization. As canonical examples, the quantization scheme is applied to three estimation problems: sparse covariance matrix estimation, sparse linear regression, and matrix completion. We study both sub-Gaussian and heavy-tailed regimes, with the underlying distribution of the heavy-tailed data assumed to possess a bounded second or fourth moment. For each model we propose new estimators based on one-bit quantized data. In the sub-Gaussian regime, our estimators achieve optimal minimax rates up to logarithmic factors, which indicates that our quantization scheme introduces almost no additional cost. In the heavy-tailed regime, while the rates of our estimators are essentially slower, these results are either the first in such a one-bit quantized, heavy-tailed setting or represent significant improvements over existing comparable results. Moreover, we make substantial contributions to the problems of one-bit compressed sensing and one-bit matrix completion. Specifically, we extend one-bit compressed sensing to sub-Gaussian or even heavy-tailed sensing vectors via convex programming. For one-bit matrix completion, our method differs essentially from the standard likelihood approach and can handle pre-quantization random noise with unknown distribution. Experimental results on synthetic data are presented to support our theoretical analysis.
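To make the truncation-dithering-quantization pipeline concrete, the following is a minimal NumPy sketch under illustrative assumptions (the function name one_bit_quantize, the truncation level tau, and the dither scale delta are hypothetical choices, not the paper's implementation); it uses the standard fact that for a Uniform[-delta, delta] dither and |x| <= delta, delta times the sign of the dithered sample is an unbiased surrogate for x.

import numpy as np

def one_bit_quantize(x, tau, delta, rng):
    # Step 1: truncation -- clip each entry to [-tau, tau] to control heavy tails.
    x_t = np.clip(x, -tau, tau)
    # Step 2: dithering -- add independent Uniform[-delta, delta] noise.
    dither = rng.uniform(-delta, delta, size=x_t.shape)
    # Step 3: one-bit quantization -- retain only the sign bit.
    q = np.sign(x_t + dither)
    # For |x_t| <= delta, E[delta * sign(x_t + dither)] = x_t,
    # so delta * q serves as an unbiased surrogate for the truncated sample.
    return q, delta * q

rng = np.random.default_rng(0)
samples = rng.standard_t(df=3, size=10_000)   # heavy-tailed synthetic data (assumption)
q, surrogate = one_bit_quantize(samples, tau=3.0, delta=3.0, rng=rng)
print(np.mean(samples), np.mean(surrogate))   # the two sample means should be close

Downstream estimators (e.g., for covariance matrices or regression) would then be built from the one-bit outputs q rather than the raw samples; the surrogate delta * q is shown only to illustrate the unbiasedness afforded by the dither.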