Despite the popularity of the false discovery rate (FDR) as an error control metric for large-scale multiple testing, its close Bayesian counterpart, the local false discovery rate (lfdr), defined as the posterior probability that a particular null hypothesis is true, is a more directly relevant standard for justifying and interpreting individual rejections. However, the lfdr is difficult to work with in small samples, as the prior distribution is typically unknown. We propose a simple multiple testing procedure and prove that it controls the expectation of the maximum lfdr across all rejections; equivalently, it controls the probability that the rejection with the largest p-value is a false discovery. Our method operates without knowledge of the prior, assuming only that the p-value density is uniform under the null and decreasing under the alternative. We also show that our method asymptotically implements the oracle Bayes procedure for a weighted classification risk, optimally trading off between false positives and false negatives. We derive the limiting distribution of the attained maximum lfdr over the rejections, and the limiting empirical Bayes regret relative to the oracle procedure.
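To fix ideas, the error criterion can be written out explicitly; the notation below ($H_i$, $p_i$, $\mathcal{R}$, $\alpha$) is ours rather than the abstract's. Writing $\mathrm{lfdr}(p_i) = \mathbb{P}(H_i = 0 \mid p_i)$ for the posterior probability that the $i$th null hypothesis is true, a procedure with rejection set $\mathcal{R}$ controls the maximum lfdr at level $\alpha$ if
\[
\mathbb{E}\Big[\max_{i \in \mathcal{R}} \mathrm{lfdr}(p_i)\Big] \le \alpha,
\]
with the convention that the maximum over an empty set is zero. When the lfdr is nondecreasing in the p-value, as it is under the assumption that the p-value density is uniform under the null and decreasing under the alternative, the maximum is attained at the rejection $i^{\ast} = \arg\max_{i \in \mathcal{R}} p_i$ with the largest p-value, and the tower property gives $\mathbb{E}\big[\max_{i \in \mathcal{R}} \mathrm{lfdr}(p_i)\big] = \mathbb{P}\big(H_{i^{\ast}} = 0,\ \mathcal{R} \neq \emptyset\big)$, the probability that this rejection is a false discovery.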