When prior information is lacking, the go-to strategy for probabilistic inference is to combine a "default prior" with the likelihood via Bayes's theorem. Objective Bayes, (generalized) fiducial inference, etc., fall under this umbrella. This construction is natural, but the corresponding posterior distributions generally offer only limited, approximately valid uncertainty quantification. The present paper takes a reimagined approach, offering posterior distributions with stronger reliability properties. The proposed construction starts with an inferential model (IM), one that takes the mathematical form of a data-driven possibility measure and features exactly valid uncertainty quantification, and then returns a so-called inner probabilistic approximation thereof. This inner probabilistic approximation inherits many of the original IM's desirable properties, including credible sets with exact coverage and asymptotic efficiency. The approximation also agrees with the familiar Bayes/fiducial solution in applications where the model has a group transformation structure. A Monte Carlo method for evaluating the probabilistic approximation is presented, along with numerical illustrations.
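The abstract does not spell out the construction, but a common starting point in the possibilistic IM literature is the possibility contour obtained by "validifying" the relative likelihood: pi_x(theta) = P_{X|theta}{ R(X, theta) <= R(x, theta) }, where R(x, theta) = L_x(theta) / sup_t L_x(t). The sketch below shows how such a contour can be evaluated by naive Monte Carlo for a normal-mean model, and how exact-coverage credible sets arise as its upper level sets. The model choice and all function names are illustrative assumptions, not taken from the paper, and the paper's inner probabilistic approximation itself is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def relative_likelihood(x, theta):
    # Relative likelihood R(x, theta) = L_x(theta) / sup_t L_x(t)
    # for an iid N(theta, 1) sample x; the MLE is the sample mean,
    # so -2 log R = n * (xbar - theta)^2.
    n, xbar = len(x), np.mean(x)
    return np.exp(-0.5 * n * (xbar - theta) ** 2)

def contour(x, theta, n_mc=10_000):
    # Monte Carlo estimate of the possibility contour
    #   pi_x(theta) = P_{X|theta}{ R(X, theta) <= R(x, theta) },
    # i.e., the fraction of datasets simulated from the model at theta
    # whose relative likelihood is no larger than the observed one.
    r_obs = relative_likelihood(x, theta)
    n = len(x)
    X = rng.normal(loc=theta, scale=1.0, size=(n_mc, n))
    r_sim = np.exp(-0.5 * n * (X.mean(axis=1) - theta) ** 2)
    return np.mean(r_sim <= r_obs)

# Observed data and a grid of candidate theta values
x = rng.normal(loc=1.5, scale=1.0, size=25)
grid = np.linspace(0.5, 2.5, 41)
pi = np.array([contour(x, t) for t in grid])

# 95% credible set: upper level set {theta : pi_x(theta) > alpha};
# its exact coverage is the validity property the abstract refers to.
alpha = 0.05
cred = grid[pi > alpha]
print(f"95% credible interval ~ [{cred.min():.3f}, {cred.max():.3f}]")
```

In this Gaussian example the contour reduces to a chi-square p-value in closed form, so the simulation loop is purely illustrative of how a contour would be evaluated when no closed form is available.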