Partially identified models, in which the parameters cannot be uniquely identified from the data, arise frequently in statistical analysis. Researchers often use Bayesian inference to analyze such models, but when an off-the-shelf MCMC sampling algorithm is applied to a partially identified model, computational performance can be poor. One known remedy is importance sampling with a transparent reparameterization (TP). This method is attractive because the model is identified with respect to the new parameterization and, at the same time, conjugate convenience priors may allow faster, i.i.d. Monte Carlo sampling. In this paper, we describe importance sampling with a TP and with a pseudo-TP, an alternative we introduce because finding a TP is sometimes difficult. We then evaluate the methods in several scenarios and compare them to an off-the-shelf MCMC method, Gibbs sampling, applied in the original parameterization. Importance sampling with a TP (ISTP) generally outperforms the off-the-shelf MCMC method, as seen in compute time and trace plots, but finding the TP that the method requires may not be easy. The pseudo-TP method, by contrast, shows mixed results and room for improvement, since it relies on an approximation that may not be adequate for a given model and dataset.
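To make the mechanics concrete, the following is a minimal sketch, not taken from the paper, of importance sampling with a transparent reparameterization for a toy partially identified model in which only the sum of two parameters enters the likelihood. The model, priors, and all names (`phi`, `lam`, `theta1`, `theta2`) are assumptions chosen purely for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Toy partially identified model: y_i ~ N(theta1 + theta2, 1), so only
# phi = theta1 + theta2 is identified by the data. TP: (phi, lam) with
# lam = theta1; the likelihood depends on phi alone.
y = rng.normal(1.2, 1.0, size=50)   # data simulated with theta1 + theta2 = 1.2
n, ybar = y.size, y.mean()

# Convenience prior (conjugate): phi ~ N(0, 10^2), lam | phi ~ N(phi / 2, 1).
# The posterior for phi is then normal in closed form; lam is drawn from its
# prior conditional because the likelihood carries no information about it.
prior_var, obs_var = 10.0**2, 1.0
post_var = 1.0 / (1.0 / prior_var + n / obs_var)
post_mean = post_var * (n * ybar / obs_var)

S = 10_000
phi = rng.normal(post_mean, np.sqrt(post_var), size=S)  # i.i.d. draws, no MCMC
lam = rng.normal(phi / 2.0, 1.0, size=S)
theta1, theta2 = lam, phi - lam

# Importance weights: target prior (here theta1, theta2 ~ iid N(0, 1), chosen
# only for illustration) over the convenience prior, evaluated at the same
# draws; the (theta1, theta2) <-> (phi, lam) map has unit Jacobian.
log_target = stats.norm.logpdf(theta1, 0, 1) + stats.norm.logpdf(theta2, 0, 1)
log_conv = stats.norm.logpdf(phi, 0, 10) + stats.norm.logpdf(lam, phi / 2, 1)
log_w = log_target - log_conv
w = np.exp(log_w - log_w.max())
w /= w.sum()

# Self-normalized importance sampling estimates under the target prior.
print("E[phi | y]    ~", np.sum(w * phi))
print("E[theta1 | y] ~", np.sum(w * theta1))
print("effective sample size ~", 1.0 / np.sum(w**2))
```

Because the draws come i.i.d. from a closed-form posterior under the convenience prior, no MCMC tuning or convergence diagnostics are needed; the adequacy of the reweighting shows up in the effective sample size of the importance weights.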