Consider the setting in which there are B > 1 candidate statistical models and one is interested in model selection. Two common approaches to this problem are to select a single model or to combine the candidate models through model averaging. Instead, we select a subset of the combined parameter space associated with the models. Specifically, a model averaging perspective is used to enlarge the parameter space, and a model selection criterion is used to select a subset of this expanded parameter space. We account for the variability of the criterion by adapting the method of Yekutieli (2012) to Bayesian model averaging (BMA). This method treats model selection as a truncation problem: we truncate the joint support of the data and the parameter space to include only small values of the covariance penalized error (CPE) criterion. The CPE is a general expression that contains several information criteria as special cases. Simulation results show that, as long as the truncated set does not have near-zero probability, we tend to obtain lower mean squared error than BMA. Theoretical results that provide the foundation for these observations are also given. We apply our approach to a dataset of American Community Survey (ACS) period estimates to illustrate that this perspective can lead to improvements over a single model.
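To fix ideas, one common form of a covariance penalized error is the squared-error version of Efron's (2004) covariance penalty; the notation below is illustrative and is not necessarily the parameterization used in the paper.

```latex
% Sketch of the covariance penalized error (CPE) under squared-error
% loss, following Efron's covariance penalty. Here y_i are observations
% and \hat{\mu}_i are fitted values (illustrative notation).
\[
  \mathrm{CPE} \;=\; \sum_{i=1}^{n} \bigl(y_i - \hat{\mu}_i\bigr)^2
  \;+\; 2 \sum_{i=1}^{n} \operatorname{Cov}\!\bigl(\hat{\mu}_i, y_i\bigr).
\]
% For a linear smoother \hat{\mu} = H y with known error variance
% \sigma^2, the penalty term equals 2\sigma^2 \operatorname{tr}(H),
% recovering Mallows' C_p; criteria such as AIC arise analogously
% under other loss functions.
```

This illustrates the sense in which the CPE "contains several information criteria as special cases": different choices of loss and of the covariance (or its estimate) recover familiar criteria.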