A common way of characterizing minimax estimators in point estimation is to move the problem into the Bayesian estimation domain and find a least favorable prior distribution. Under mild conditions, the Bayes estimator induced by a least favorable prior is then known to be minimax. However, finding least favorable distributions can be challenging because it requires optimizing over the space of probability distributions, which is infinite-dimensional. This paper develops a dimensionality reduction method that moves the optimization to a finite-dimensional setting with an explicit bound on the dimension. The benefit of this dimensionality reduction is that it permits the use of popular algorithms, such as projected gradient ascent, to find least favorable priors. Throughout the paper, in order to make progress on the problem, we restrict ourselves to Bayes risks induced by a relatively large class of loss functions, namely Bregman divergences.
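The projected gradient ascent mentioned above operates over priors supported on finitely many points, i.e. weight vectors on the probability simplex. A minimal sketch of that ascent loop is below; the objective is a hypothetical concave stand-in for the actual Bayes risk (which the paper defines through Bregman divergences), and only the simplex projection and the ascent loop itself are generic:

```python
def project_to_simplex(v):
    # Euclidean projection onto the probability simplex
    # {p : p_i >= 0, sum_i p_i = 1}, via the standard
    # sort-and-threshold procedure.
    u = sorted(v, reverse=True)
    cum = 0.0
    theta = 0.0
    for i, ui in enumerate(u, start=1):
        cum += ui
        t = (cum - 1.0) / i
        if ui - t > 0.0:
            theta = t
    return [max(x - theta, 0.0) for x in v]

def projected_gradient_ascent(grad, x0, step=0.1, iters=500):
    # Maximize a concave objective over the simplex:
    # take a gradient step, then project back onto the simplex.
    x = project_to_simplex(list(x0))
    for _ in range(iters):
        g = grad(x)
        x = project_to_simplex([xi + step * gi for xi, gi in zip(x, g)])
    return x

# Toy stand-in objective (NOT the paper's Bayes risk):
# f(p) = -sum_i p_i^2, a concave function maximized at the
# uniform weights; its gradient is grad f(p)_i = -2 p_i.
prior = projected_gradient_ascent(lambda p: [-2.0 * pi for pi in p],
                                  [0.7, 0.1, 0.1, 0.1])
```

In the paper's setting the gradient of the toy objective would be replaced by the gradient of the Bayes risk with respect to the prior weights; the projection step is what keeps the iterates valid probability distributions.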