We consider a broad class of permutation invariant statistical problems, extending the standard decision theoretic definition to also allow selective inference tasks, where the target is specified only after seeing the data. For any such problem we show that, among all permutation invariant procedures, the minimizer of the risk at $\boldsymbol{\theta}$ is precisely the rule minimizing the Bayes risk under a (postulated) discrete prior assigning equal probability to every permutation of $\boldsymbol{\theta}$. This gives an explicit characterization of the greatest lower bound on the risk of any sensible procedure in a wide range of problems. Furthermore, in a permutation invariant problem of estimating the parameter of a selected population under squared loss, we prove that this lower bound asymptotically coincides with a simpler lower bound, attained by the Bayes solution that replaces the aforementioned uniform prior on all permutations of $\boldsymbol{\theta}$ with the i.i.d. prior having the same marginals. This has important algorithmic implications, because it suggests that our greatest lower bound is asymptotically attainable, uniformly in $\boldsymbol{\theta}$, by an empirical Bayes procedure. Altogether, this extends theory that has previously been established only for the special case of compound decision problems.
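As a minimal sketch of the identity behind this characterization (the notation $S_n$, $\boldsymbol{\theta}_\sigma$ and $\pi_{\boldsymbol{\theta}}$ is introduced here purely for illustration and need not match the paper's): when the model is permutation invariant and the rule $\delta$ is itself permutation invariant, its risk is a symmetric function of $\boldsymbol{\theta}$, so
\[
R(\boldsymbol{\theta}, \delta)
= \frac{1}{n!} \sum_{\sigma \in S_n} R(\boldsymbol{\theta}_\sigma, \delta)
= r(\pi_{\boldsymbol{\theta}}, \delta),
\]
where $\boldsymbol{\theta}_\sigma$ permutes the coordinates of $\boldsymbol{\theta}$ and $\pi_{\boldsymbol{\theta}}$ is the uniform discrete prior on $\{\boldsymbol{\theta}_\sigma : \sigma \in S_n\}$. Minimizing the right-hand side over all rules is then an ordinary Bayes problem, and since the Bayes solution under $\pi_{\boldsymbol{\theta}}$ is itself permutation invariant, one would obtain
\[
\inf_{\delta \,\text{inv.}} R(\boldsymbol{\theta}, \delta)
= \inf_{\delta}\, r(\pi_{\boldsymbol{\theta}}, \delta),
\]
which is the greatest lower bound referred to above; the simpler asymptotic bound would then arise from replacing $\pi_{\boldsymbol{\theta}}$ by the i.i.d. prior with the same marginals.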