Power-expected-posterior (PEP) methodology, which borrows ideas from the literature on power priors, expected-posterior priors and unit-information priors, provides a systematic way to construct objective priors. The basic idea is to use imaginary training samples to update a noninformative prior into a minimally-informative prior. In this work, we develop a novel definition of PEP priors for generalized linear models that relies on a Laplace expansion of the likelihood of the imaginary training sample. This approach has various computational, practical and theoretical advantages over previous proposals for noninformative priors for generalized linear models. We place special emphasis on logistic regression models, where sample separation presents particular challenges to alternative methodologies. We investigate both asymptotic and finite-sample properties of the procedures, showing that the method is both asymptotically and intrinsically consistent, and that its performance is at least competitive with, and in some settings superior to, that of alternative approaches in the literature.
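For orientation, the following is a minimal sketch of the generic PEP construction that the abstract refers to, as it appears in the earlier power-expected-posterior literature; the symbols $y^{*}$, $n^{*}$, $\delta$, $\pi^{N}$ and $m^{N}_{0}$ are notational assumptions of this sketch rather than the paper's own definitions:
\[
\pi^{\mathrm{PEP}}(\theta \mid \delta) \;=\; \int \pi^{N}(\theta \mid y^{*}; \delta)\, m^{N}_{0}(y^{*} \mid \delta)\, \mathrm{d}y^{*},
\qquad
\pi^{N}(\theta \mid y^{*}; \delta) \;\propto\; f(y^{*} \mid \theta)^{1/\delta}\, \pi^{N}(\theta),
\]
where $y^{*}$ is an imaginary training sample of size $n^{*}$, $\pi^{N}$ is a baseline noninformative prior, and $m^{N}_{0}$ is the prior predictive density of $y^{*}$ under a reference model. Raising the likelihood to the power $1/\delta$ with the common default $\delta = n^{*}$ scales the information carried by the imaginary sample down to roughly that of a single observation, which is what makes the resulting prior minimally informative; the contribution described in the abstract is to make this construction tractable for generalized linear models via a Laplace expansion of $f(y^{*} \mid \theta)$.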