Low-rank multivariate regression (LRMR) is an important statistical learning model that combines highly correlated tasks into a multi-response regression problem with a low-rank prior on the coefficient matrix. In this paper, we study quantized LRMR, a practical setting where the responses and/or the covariates are discretized to finite precision. We focus on estimating the underlying coefficient matrix. To make it possible to construct a consistent estimator that achieves arbitrarily small error, we employ uniform quantization with random dithering, i.e., we add appropriate random noise to the data before quantization. Specifically, we use uniform dither for the responses and triangular dither for the covariates. Based on the quantized data, we propose constrained Lasso and regularized Lasso estimators and derive non-asymptotic error bounds. With the aid of dithering, the estimators achieve the minimax optimal rate, while quantization only slightly worsens the multiplicative factor in the error rate. Moreover, we extend our results to a low-rank regression model with matrix responses. We corroborate and demonstrate our theoretical results via simulations on synthetic data and on image restoration.
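To make the dithered quantization scheme concrete, the following is a minimal Python sketch of uniform quantization with uniform dither (for the responses) and triangular dither (for the covariates). This is our own illustration under standard conventions, not the paper's code: the resolution delta, the mid-rise quantizer, and all function and variable names are assumptions.

```python
import numpy as np

def uniform_quantize(x, delta):
    # Mid-rise uniform quantizer with resolution delta: maps x to the
    # center of the length-delta cell that contains it.
    return delta * (np.floor(x / delta) + 0.5)

def quantize_uniform_dither(x, delta, rng):
    # Uniform dither on [-delta/2, delta/2), as used for the responses:
    # the dither makes the quantization error zero-mean and independent
    # of the signal.
    tau = rng.uniform(-delta / 2, delta / 2, size=x.shape)
    return uniform_quantize(x + tau, delta)

def quantize_triangular_dither(x, delta, rng):
    # Triangular dither (sum of two independent uniform dithers), as
    # used for the covariates: it additionally makes the error's second
    # moment signal-independent (variance delta**2 / 4).
    tau = (rng.uniform(-delta / 2, delta / 2, size=x.shape)
           + rng.uniform(-delta / 2, delta / 2, size=x.shape))
    return uniform_quantize(x + tau, delta)

if __name__ == "__main__":
    # Hypothetical toy instance of the LRMR model Y = X B + noise,
    # with a rank-r coefficient matrix B; sizes are illustrative.
    rng = np.random.default_rng(0)
    n, p, d, r, delta = 2000, 30, 10, 3, 0.5
    X = rng.standard_normal((n, p))
    B = rng.standard_normal((p, r)) @ rng.standard_normal((r, d))
    Y = X @ B + 0.1 * rng.standard_normal((n, d))
    X_q = quantize_triangular_dither(X, delta, rng)  # quantized covariates
    Y_q = quantize_uniform_dither(Y, delta, rng)     # quantized responses
    # With dithering, the quantization error is zero-mean:
    print(abs((Y_q - Y).mean()))  # close to 0
```

The quantized pair (Y_q, X_q) would then be fed to the constrained or regularized Lasso estimators analyzed in the paper in place of the full-precision data.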