Inferring the means in the multivariate normal model $X \sim N_n(\theta, I)$, with unknown mean vector $\theta=(\theta_1,...,\theta_n)' \in \mathbb{R}^n$ and observed data $X=(X_1,...,X_n)'\in \mathbb{R}^n$, is a challenging task known as the problem of many normal means (MNMs). This paper tackles two fundamental kinds of MNMs within the framework of Inferential Models (IMs). The first kind, referred to as the {\it classic} kind, takes the model as stated, with no additional assumptions on $\theta$. The second kind, referred to as the {\it empirical Bayes} kind, assumes that the individual means $\theta_i$ are drawn independently {\it a priori} from an unknown distribution $G(\cdot)$. The IM formulation for the empirical Bayes kind utilizes numerical deconvolution, enabling prior-free probabilistic inference with over-parameterization of $G(\cdot)$. The IM formulation for the classic kind, in contrast, utilizes a latent random permutation, providing a novel approach to reasoning with uncertainty and a deeper understanding of the problem. For uncertainty quantification within the familiar frequentist framework, the IM method of maximum plausibility is used for point estimation. Conservative interval estimation is obtained from plausibility, with a Monte Carlo-based adaptive adjustment used to construct shorter confidence intervals with targeted coverage. These methods are demonstrated through simulation studies and a real-data example. The numerical results show that the proposed point estimators outperform the traditional James-Stein estimator and Efron's $g$-modeling in terms of mean squared error, and that the adaptive intervals are satisfactory in both coverage and efficiency. The paper concludes with suggestions for future developments and extensions of the proposed methods.