We study high-dimensional Bayesian linear regression with product priors. Using the nascent theory of non-linear large deviations (Chatterjee and Dembo, 2016), we derive sufficient conditions for the leading-order correctness of the naive mean-field approximation to the log-normalizing constant of the posterior distribution. Subsequently, assuming a true linear model for the observed data, we derive a limiting infinite-dimensional variational formula for the log-normalizing constant of the posterior. Furthermore, we establish that under an additional "separation" condition, the variational problem has a unique optimizer, and this optimizer governs the probabilistic properties of the posterior distribution. We provide intuitive sufficient conditions for the validity of this "separation" condition. Finally, we illustrate our results on concrete examples with specific design matrices.
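For orientation, the following is a minimal sketch of the setting described above; the notation (response y, design matrix X, coefficients beta, noise level sigma^2, prior pi, dimensions n and p) is assumed here and is not fixed by the abstract. With model y = X\beta + \varepsilon, \varepsilon \sim N(0, \sigma^2 I_n), and i.i.d. product prior \beta_i \sim \pi, the posterior normalizing constant is
\[
  Z_n \;=\; \int_{\mathbb{R}^p} \exp\!\Big(-\tfrac{1}{2\sigma^2}\,\lVert y - X\beta\rVert_2^2\Big)\, \prod_{i=1}^{p} \pi(d\beta_i),
\]
and the naive mean-field approximation restricts the Gibbs variational principle to product measures, giving the lower bound
\[
  \log Z_n \;\ge\; \sup_{Q = \otimes_{i=1}^{p} Q_i} \Big\{ \mathbb{E}_{Q}\Big[-\tfrac{1}{2\sigma^2}\,\lVert y - X\beta\rVert_2^2\Big] \;-\; \sum_{i=1}^{p} D_{\mathrm{KL}}(Q_i \,\Vert\, \pi) \Big\}.
\]
The sufficient conditions referenced above are those under which this lower bound is tight to leading order.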