This paper focuses on parameter estimation and introduces a new method for lower bounding the Bayesian risk. The method allows for the use of virtually \emph{any} information measure, including R\'enyi's $\alpha$, $\varphi$-Divergences, and Sibson's $\alpha$-Mutual Information. The approach considers divergences as functionals of measures and exploits the duality between spaces of measures and spaces of functions. In particular, we show that one can lower bound the risk with any information measure by upper bounding its dual via Markov's inequality. We are thus able to provide estimator-independent impossibility results thanks to the Data-Processing Inequalities that divergences satisfy. The results are then applied to settings of interest involving both discrete and continuous parameters, including the ``Hide-and-Seek'' problem, and compared to state-of-the-art techniques. An important observation is that the behaviour of the lower bound as a function of the number of samples is influenced by the choice of the information measure. We leverage this by introducing a new divergence, inspired by the ``Hockey-Stick'' Divergence, which is empirically demonstrated to provide the largest lower bound across all considered settings. If the observations are subject to privatisation, stronger impossibility results can be obtained via Strong Data-Processing Inequalities. The paper also discusses some generalisations and alternative directions.
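For intuition, a minimal sketch of the mechanism for a generic $\varphi$-Divergence is the following; the symbols $W$ (parameter), $X$ (observations), $\hat{W}$ (estimate), $\ell$ (nonnegative loss), and the threshold $\rho$ are illustrative, and the precise statements and auxiliary choices in the body of the paper may differ. Markov's inequality first turns the Bayesian risk into a tail probability,
\begin{align*}
  R \;=\; \mathbb{E}\!\left[\ell(W,\hat{W})\right]
    \;\ge\; \rho\,\mathbb{P}\!\left[\ell(W,\hat{W}) \ge \rho\right]
    \;=\; \rho\left(1 - \mathbb{P}\!\left[\ell(W,\hat{W}) < \rho\right]\right),
    \qquad \rho > 0.
\end{align*}
The duality between spaces of measures and spaces of functions then gives, for a convex $\varphi$ with convex conjugate $\varphi^{\star}$ and any bounded measurable $g$,
\begin{align*}
  \mathbb{E}_{P}[g] \;\le\; \mathbb{E}_{Q}\!\left[\varphi^{\star}(g)\right] + D_{\varphi}(P \,\|\, Q).
\end{align*}
Choosing $P = P_{W\hat{W}}$ (the joint law), $Q = P_{W} \otimes P_{\hat{W}}$ (the product of the marginals), and $g$ supported on the event $\{\ell(W,\hat{W}) < \rho\}$ upper bounds $\mathbb{P}[\ell(W,\hat{W}) < \rho]$ in terms of $I_{\varphi}(W;\hat{W}) = D_{\varphi}(P_{W\hat{W}} \,\|\, P_{W} \otimes P_{\hat{W}})$ and the small-ball probability $\sup_{\hat{w}} P_{W}\!\left(\ell(W,\hat{w}) < \rho\right)$; the Data-Processing Inequality then replaces $I_{\varphi}(W;\hat{W})$ with $I_{\varphi}(W;X)$, rendering the resulting lower bound on $R$ estimator-independent.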