Although Bayesian variable selection methods have been intensively studied, their routine use in practice lags behind that of non-Bayesian counterparts such as the Lasso, likely due to difficulties in both computation and the flexibility of prior choices. To ease these challenges, we propose neuronized priors, which unify and extend popular shrinkage priors such as the Laplace, Cauchy, horseshoe, and spike-and-slab priors. A neuronized prior can be written as the product of a Gaussian weight variable and a scale variable obtained by transforming a Gaussian through an activation function. Compared with classic spike-and-slab priors, neuronized priors achieve the same explicit variable selection without employing any latent indicator variables, which results in both more efficient and flexible posterior sampling and more effective posterior modal estimation. Theoretically, we provide specific conditions on the neuronized formulation under which the optimal posterior contraction rate is achieved, and show that a broadly applicable MCMC algorithm converges at an exponentially fast rate under the neuronized formulation. We also examine various simulated and real data examples and demonstrate that sampling with the neuronized representation is computationally as efficient as, or more efficient than, its standard counterpart in all well-known cases. An R package, NPrior, is available on CRAN for using neuronized priors in Bayesian linear regression.
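To make the construction concrete: one natural reading of the abstract is theta = T(alpha - alpha0) * w, where alpha ~ N(0, 1) feeds an activation function T, w ~ N(0, tau_w^2) is the Gaussian weight, and alpha0 is a threshold parameter. The following minimal R sketch illustrates this under that reading; the function names (rneuronized, relu) and arguments are illustrative assumptions and are not the NPrior package's API.

```r
## Minimal sketch of drawing from a neuronized prior
## theta = T(alpha - alpha0) * w, with alpha ~ N(0, 1) and w ~ N(0, tau_w^2).
## Names and defaults here are assumptions, not the NPrior interface.

relu <- function(t) pmax(t, 0)   # ReLU activation: sends alpha < alpha0 to exactly 0

rneuronized <- function(n, activation = relu, alpha0 = 0, tau_w = 1) {
  alpha <- rnorm(n)                 # Gaussian variable entering the activation
  w     <- rnorm(n, sd = tau_w)     # Gaussian weight variable
  activation(alpha - alpha0) * w    # neuronized coefficient theta
}

set.seed(1)
## ReLU with a positive threshold puts positive mass at exactly zero,
## mimicking the spike of a spike-and-slab prior.
theta_ss <- rneuronized(1e5, activation = relu, alpha0 = 1)
mean(theta_ss == 0)   # fraction of exact zeros; roughly pnorm(1), about 0.84

## Identity activation gives a product of two Gaussians, a heavy-tailed,
## Laplace-like shrinkage prior with no exact zeros.
theta_lap <- rneuronized(1e5, activation = identity)
```

The choice of activation function is what moves the prior along the spectrum from continuous shrinkage (identity-like activations) to exact sparsity (thresholding activations such as ReLU), which is how a single formulation can cover the families named in the abstract.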