Testing for independence between two random vectors is a fundamental problem in statistics. Empirical studies show that many existing omnibus consistent tests may not work well for some strongly nonmonotonic and nonlinear relationships. To explore the reasons behind this issue, we transform the multivariate independence testing problem, in a novel way, into an equivalent problem of checking the equality of two bivariate means. An important observation is that the power loss is mainly due to the cancellation of positive and negative terms in dependence metrics, making them very close to zero. Motivated by this observation, we propose a class of consistent metrics, indexed by a positive integer $\gamma$, that exactly characterize independence. Theoretically, we show that the metrics with even or infinite $\gamma$ can effectively avoid the cancellation and have high power under alternatives in which the two mean differences offset each other. Since we target a wide range of dependence scenarios in practice, we further suggest combining the p-values of test statistics with different $\gamma$'s through Fisher's method. We illustrate the advantages of the proposed tests through extensive numerical studies.
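For context, Fisher's method combines $m$ p-values $p_{\gamma_1},\dots,p_{\gamma_m}$ through the classical statistic below; this is the standard textbook form, not a detail taken from the paper, and its chi-squared null limit assumes the p-values are independent and uniform, so in the present setting, where the statistics share the same data, the combined statistic would typically be calibrated by resampling instead:
\[
T \;=\; -2\sum_{j=1}^{m}\log p_{\gamma_j}, \qquad T \sim \chi^2_{2m} \ \text{under } H_0 \ \text{when } p_{\gamma_1},\dots,p_{\gamma_m} \ \text{are independent and uniform on } (0,1).
\]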