Empirical Bayes (EB) is a popular framework for large-scale inference that aims to find data-driven estimators to compete with the Bayesian oracle that knows the true prior. Two principled approaches to EB estimation have emerged over the years: $f$-modeling, which constructs an approximate Bayes rule by estimating the marginal distribution of the data, and $g$-modeling, which estimates the prior from data and then applies the learned Bayes rule. For the Poisson model, the prototypical examples are the celebrated Robbins estimator and the nonparametric MLE (NPMLE), respectively. It has long been recognized in practice that the Robbins estimator, while conceptually appealing and computationally simple, lacks robustness and can be easily derailed by ``outliers'', unlike the NPMLE, which provides a more stable and interpretable fit thanks to its Bayes form. On the other hand, not only do the existing theories shed little light on this phenomenon, but they all point to the opposite, as both methods have recently been shown to be optimal in terms of regret (excess over the Bayes risk) for compactly supported and subexponential priors. In this paper we provide a theoretical justification for the superiority of $g$-modeling over $f$-modeling for heavy-tailed data by considering priors with a bounded $p$-th moment for some $p>1$. We show that with mild regularization, any $g$-modeling method that is Hellinger rate-optimal in density estimation achieves an optimal total regret $\tilde \Theta(n^{\frac{3}{2p+1}})$; in particular, the special case of the NPMLE succeeds without regularization. In contrast, there exists an $f$-modeling estimator whose density estimation rate is optimal but whose EB regret is suboptimal by a polynomial factor. These results show that the proper Bayes form provides a ``general recipe of success'' for optimal EB estimation that applies to all $g$-modeling (but not $f$-modeling) methods.
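For concreteness, the following is a brief recap of the standard definitions of the two estimators contrasted above, under the usual Poisson EB setup $Y_i \mid \theta_i \sim \mathrm{Poi}(\theta_i)$, $\theta_i \sim G$; this is a sketch of the textbook forms, not notation taken from the paper itself.

```latex
% Standard Poisson empirical Bayes setup: Y_i | theta_i ~ Poi(theta_i), theta_i ~ G.
% Bayes rule under the marginal pmf f_G(y) = \int e^{-\theta}\theta^y / y! \, dG(\theta):
\[
  \theta_G(y) \;=\; \mathbb{E}_G[\theta \mid Y = y] \;=\; (y+1)\,\frac{f_G(y+1)}{f_G(y)}.
\]
% f-modeling (Robbins): plug the empirical pmf into the Bayes rule,
% with N(y) = \#\{i : Y_i = y\} the count of observations equal to y:
\[
  \hat\theta_{\mathrm{Robbins}}(y) \;=\; (y+1)\,\frac{N(y+1)}{N(y)}.
\]
% g-modeling (NPMLE): first estimate the prior by maximum likelihood,
% then apply the Bayes rule of the fitted prior:
\[
  \hat G \;=\; \operatorname*{arg\,max}_{G} \sum_{i=1}^n \log f_G(Y_i),
  \qquad
  \hat\theta_{\mathrm{NPMLE}}(y) \;=\; (y+1)\,\frac{f_{\hat G}(y+1)}{f_{\hat G}(y)}.
\]
```

The ratio form makes the contrast described in the abstract visible: the Robbins estimate depends on raw empirical counts and can blow up at values $y$ observed only a few times, whereas the NPMLE plug-in is always a proper Bayes rule for some prior and hence inherits its stability.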