Statistical inference for sparse covariance matrices is crucial for revealing the dependence structure of large multivariate data sets, but scalable and theoretically supported Bayesian methods are lacking. In this paper, we propose a beta-mixture shrinkage prior for sparse covariance matrices that is computationally more efficient than the spike-and-slab prior, and we establish its minimax optimality in high-dimensional settings. The proposed prior places beta-mixture shrinkage priors on the off-diagonal entries and gamma priors on the diagonal entries. To ensure positive definiteness of the resulting covariance matrix, we further restrict the support of the prior to a subspace of positive definite matrices. We derive the convergence rate of the induced posterior under the Frobenius norm and establish a minimax lower bound for sparse covariance matrices. The class of sparse covariance matrices considered for the minimax lower bound is controlled by the number of nonzero off-diagonal elements and is more intuitive than the classes that have appeared in the literature. The obtained posterior convergence rate coincides with the minimax lower bound unless the true covariance matrix is extremely sparse. In a simulation study, we show that the proposed method is computationally more efficient than its competitors while achieving comparable performance. The advantages of the shrinkage prior are further demonstrated on two real data sets.
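For concreteness, the display below is a minimal sketch of one common way such a beta-mixture scale hierarchy can be written; the hyperparameters $a$, $b$, $\tau$, $\alpha$, $\beta$ and the particular mixing representation are illustrative placeholders and not necessarily the exact specification adopted in the paper:
\[
\sigma_{ij} \mid \rho_{ij} \sim \mathrm{N}\!\left(0,\; \frac{\rho_{ij}}{1-\rho_{ij}}\,\tau^{2}\right), \qquad
\rho_{ij} \sim \mathrm{Beta}(a, b), \quad i < j, \qquad
\sigma_{jj} \sim \mathrm{Gamma}(\alpha, \beta),
\]
with the resulting prior on $\Sigma = (\sigma_{ij})$ then restricted to a subspace of positive definite matrices, as described above, so that every draw is a valid covariance matrix.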