Algorithms for controlling the false discovery rate (FDR) in multiple testing can be divided into two important classes: offline and online. The former generally achieve significantly higher power of discovery, while the latter allow decisions to be made sequentially and hypotheses to be formulated adaptively based on past observations. With existing methodology, it is unclear how one could trade off the benefits of these two broad families of algorithms while preserving their formal FDR guarantees. To this end, we introduce $\text{Batch}_{\text{BH}}$ and $\text{Batch}_{\text{St-BH}}$, algorithms for controlling the FDR when a possibly infinite sequence of batches of hypotheses is tested by repeated application of one of the most widely used offline algorithms, the Benjamini-Hochberg (BH) method, or of Storey's improvement of the BH method. We show that our algorithms interpolate between existing online and offline methodology, thus trading off the best of both worlds.
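For reference, the offline BH step that is applied to each batch can be sketched as follows. This is a minimal illustration of the standard BH procedure only, not the authors' batch algorithm; the function name and level parameter are chosen for exposition.

```python
def benjamini_hochberg(p_values, alpha=0.05):
    """Return a boolean rejection decision for each p-value,
    controlling the FDR at level alpha (standard BH procedure)."""
    m = len(p_values)
    # Sort p-values in ascending order, remembering original positions.
    order = sorted(range(m), key=lambda i: p_values[i])
    # Find the largest rank k (1-indexed) with p_(k) <= k * alpha / m.
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= rank * alpha / m:
            k_max = rank
    # Reject the hypotheses corresponding to the k_max smallest p-values.
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= k_max:
            reject[i] = True
    return reject
```

A batch-wise scheme in the spirit of the paper would call such a routine on each incoming batch, with the level for each batch determined by the outcomes on earlier batches.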