Sparse reduced rank regression is an essential statistical learning method. In the contemporary literature, estimation is typically formulated as a nonconvex optimization problem, which in numerical computation often converges only to a local optimum. Yet the theoretical analysis is invariably centered on the global optimum, creating a discrepancy between the statistical guarantee and the numerical computation. In this work, we propose a new algorithm to address this discrepancy and establish a nearly optimal rate for the algorithmic solution. We also show that the algorithm achieves this estimation within a polynomial number of iterations. In addition, we present a generalized information criterion that simultaneously ensures the consistency of support set recovery and rank estimation. Under the proposed criterion, we show that our algorithm attains the oracle reduced rank estimate with high probability. Numerical studies and an application to ovarian cancer genetic data demonstrate the effectiveness and scalability of our approach.