A core strength of knockoff methods is their virtually limitless customizability, allowing an analyst to exploit machine learning algorithms and domain knowledge without threatening the method's robust finite-sample false discovery rate control guarantee. While several previous works have investigated regimes where specific implementations of knockoffs are provably powerful, general negative results are more difficult to obtain for such a flexible method. In this work we recast the fixed-$X$ knockoff filter for the Gaussian linear model as a conditional post-selection inference method. It adds user-generated Gaussian noise to the ordinary least squares estimator $\hat\beta$ to obtain a "whitened" estimator $\widetilde\beta$ with uncorrelated entries, and performs inference using $\text{sgn}(\widetilde\beta_j)$ as the test statistic for $H_j:\; \beta_j = 0$. We prove equivalence between our whitening formulation and the more standard formulation involving negative control predictor variables, showing how the fixed-$X$ knockoffs framework can be used for multiple testing on any problem with (asymptotically) multivariate Gaussian parameter estimates. Relying on this perspective, we obtain the first negative results that universally upper-bound the power of all fixed-$X$ knockoff methods, without regard to choices made by the analyst. Our results show roughly that, if the leading eigenvalues of $\text{Var}(\hat\beta)$ are large with dense leading eigenvectors, then there is no way to whiten $\hat\beta$ without irreparably erasing nearly all of the signal, rendering $\text{sgn}(\widetilde\beta_j)$ too uninformative for accurate inference. We give conditions under which the true positive rate (TPR) for any fixed-$X$ knockoff method must converge to zero even while the TPR of Bonferroni-corrected multiple testing tends to one, and we explore several examples illustrating this phenomenon.
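The whitening construction described above can be illustrated numerically. The following is a minimal sketch, not the paper's exact procedure: it assumes the noise level $\sigma$ is known and uses the simple (generally suboptimal) choice of a diagonal target covariance $D = \lambda_{\max}(\text{Var}(\hat\beta))\, I$; the variable names (`D`, `beta_tilde`, etc.) are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sketch: whiten the OLS estimator by adding independent
# Gaussian noise so that the result has diagonal covariance, then read
# off sgn(beta_tilde_j) as the test statistic for H_j: beta_j = 0.

n, p, sigma = 200, 5, 1.0
X = rng.standard_normal((n, p))
beta = np.array([2.0, -2.0, 0.0, 0.0, 0.0])      # two signals, three nulls
y = X @ beta + sigma * rng.standard_normal(n)

# OLS estimate and its covariance Sigma = sigma^2 (X^T X)^{-1}
XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
Sigma = sigma**2 * XtX_inv

# Choose a diagonal D with D - Sigma positive semidefinite; here the
# crude choice D = (1 + eps) * lambda_max(Sigma) * I (assumption, not
# the optimal tuning an analyst would use in practice).
D = 1.01 * np.max(np.linalg.eigvalsh(Sigma)) * np.eye(p)

# Add user-generated noise omega ~ N(0, D - Sigma), independent of the
# data, so that beta_tilde = beta_hat + omega ~ N(beta, D) has
# uncorrelated ("whitened") entries.
omega = rng.multivariate_normal(np.zeros(p), D - Sigma)
beta_tilde = beta_hat + omega

print("sgn(beta_tilde):", np.sign(beta_tilde))
```

The sketch makes the paper's central tension concrete: the added noise must be large enough to cancel the off-diagonal structure of $\text{Var}(\hat\beta)$, so when its leading eigenvalues are large with dense eigenvectors, the signs of $\widetilde\beta$ carry little signal.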