We interpret likelihood-based test functions from a geometric perspective in which the Kullback-Leibler (KL) divergence quantifies the distance from one distribution to another. Such a test function can be viewed as a sub-Gaussian random variable, and we propose a principled way to calculate its sub-Gaussian norm. An error bound for binary hypothesis testing can then be obtained in terms of the sub-Gaussian norm and the KL divergence, and it is more informative than Pinsker's bound when the significance level is prescribed. For $M$-ary hypothesis testing, we also derive an error bound that complements Fano's inequality, being more informative when the number of hypotheses or the sample size is not large.
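For reference, the classical quantities and bounds mentioned above are recalled below in their standard textbook forms; the notation is illustrative and these are not the new bounds derived in this work.
\begin{align*}
  D_{\mathrm{KL}}(P \,\|\, Q) &= \int \log\frac{\mathrm{d}P}{\mathrm{d}Q}\,\mathrm{d}P
    && \text{(KL divergence)} \\
  \|X\|_{\psi_2} &= \inf\bigl\{\, t > 0 : \mathbb{E}\, e^{X^2/t^2} \le 2 \,\bigr\}
    && \text{(sub-Gaussian norm)} \\
  \mathrm{TV}(P, Q) &\le \sqrt{\tfrac{1}{2}\, D_{\mathrm{KL}}(P \,\|\, Q)}
    && \text{(Pinsker's inequality)} \\
  P_e &\ge 1 - \frac{I(\theta; X) + \log 2}{\log M}
    && \text{(Fano's inequality, uniform prior over $M$ hypotheses)}
\end{align*}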